iSCSI performance, vs. NFS, versus rsync - Setup


  1. iSCSI performance, vs. NFS, versus rsync

    I'm dealing with an rsnapshot service, used to back up a number of systems
    into user-friendly daily snapshots. I'm looking at additional storage as the
    backup requirements grow, and have been evaluating an iSCSI and NFS appliance
    server. When transferring data to the mounted iSCSI device with rsync, I'm
    seeing only 2 MBytes/second.

    Is this typical? If not, does anyone here have rough specs for data transfer
    over iSCSI? I've asked the vendor *repeatedly* for any kind of
    customer-reported stats; they've settled on a mantra of 'it depends on local
    configuration', are unwilling to give even a hint, and their brochures don't
    state expected transfer speeds anywhere I've been able to find.

    Has anyone here worked with it enough to estimate such bandwidth for me? Or
    tell me how many distinct iSCSI devices I can typically export from the same
    server before overloading it?

  2. Re: iSCSI performance, vs. NFS, versus rsync

    On Fri, 22 Aug 2008 23:16:21 +0100, Nico Kadel-Garcia wrote:

    > Is this typical? If not, does anyone here have rough specs of data
    > transfer over iSCSI?
    > Has anyone here worked with it enough to estimate such bandwidth for me?
    > Or tell me how many distinct iSCSI devices I can typically export from
    > the same server before overloading it?


    Well, I hate to tell you, but it depends on the local configuration.

    I've done a few iSCSI setups - here's a typical configuration I've
    used:

    iSCSI box running Debian/BSD/OpenSolaris/what-have-you - RAID10 array
    of 3 Gb/s SAS drives on an LSI SAS RAID controller - connected to the
    LAN with 1 Gbit Ethernet (copper). I've seen average read/write
    transfer rates of around 75-100 MB/s.

    The most iSCSI devices I've hosted on one box is 4. Of course, if
    you're doing heavy I/O on one device, throughput on the other devices
    will drop.

    Things that affect your iSCSI throughput:
    1. Network link speed (10/100/1000/10,000 Mbit/s, etc.)
    2. Drive technology (IDE/U320/SAS/SATA, etc.) and drive transfer rate.
    3. Drive controller technology and bus speed.
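
    Factor 1 alone puts a hard ceiling on what you'll ever see. As a rough
    sketch (assuming ~10% combined TCP/IP + iSCSI header overhead - an
    estimate, not a measured constant):

```shell
# Back-of-the-envelope: usable MB/s ~= (link Mbit/s / 8) * 0.9,
# done here in integer shell arithmetic as mbit * 9 / 80.
for mbit in 100 1000 10000; do
    ceiling=$(( mbit * 9 / 80 ))
    echo "${mbit} Mbit/s link -> ~${ceiling} MB/s usable"
done
```

    The 75-100 MB/s figure above sits just under the ~112 MB/s gigabit
    ceiling, which is why a faster array alone won't help without a
    faster link.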

    Hope that helps.



  4. Re: iSCSI performance, vs. NFS, versus rsync

    Mike Bleiweiss wrote:
    > On Fri, 22 Aug 2008 23:16:21 +0100, Nico Kadel-Garcia wrote:
    >
    >> [quoted text muted]

    >
    > Well I hate to tell you, but, it depends on the local configuration.


    Well, yes.

    > I've done a few iSCSI setups - here's a typical configuration I've
    > used:
    >
    > iSCSI box running Debian/BSD/OpenSolaris/what-have-you - RAID10 array
    > of 3 Gb/s SAS drives on an LSI SAS RAID controller - connected to the
    > LAN with 1 Gbit Ethernet (copper). I've seen average read/write
    > transfer rates of around 75-100 MB/s.


    THANK YOU! An order-of-magnitude figure is exactly what I needed.

    > The most iSCSI devices I've hosted on one box is 4. Of course, if
    > you're doing heavy I/O on one device, throughput on the other devices
    > will drop.
    >
    > Things that affect your iSCSI throughput:
    > 1. Network link speed (10/100/1000/10,000 Mbit/s, etc.)
    > 2. Drive technology (IDE/U320/SAS/SATA, etc.) and drive transfer rate.
    > 3. Drive controller technology and bus speed.
    >
    > Hope that helps.


    Yes, it does. Yours sounds like a close-to-ideal setup.

  5. Re: iSCSI performance, vs. NFS, versus rsync

    Nico Kadel-Garcia wrote:
    > I'm dealing with an rsnapshot service, used to back up a number of
    > systems into user-friendly daily snapshots. I'm looking at additional
    > storage as the backup requirements grow, and have been evaluating an
    > iSCSI and NFS appliance server. When transferring data to the mounted
    > iSCSI device with rsync, I'm seeing only 2 MBytes/second.


    That's VERY slow.

    >
    > Is this typical?


    No. Not even close. It can depend on how you're measuring, though:
    for a large data set, rsync will spin its wheels for a while figuring
    out what to transfer, so it's "possible" (but very, very unlikely)
    that the overall time just looks bad. It also depends heavily on your
    network speed, and on the plethora of networking variables in general.
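
    One way to rule rsync in or out (a sketch - /mnt/iscsi stands in for
    wherever your iSCSI LUN is mounted) is to time a plain sequential
    write with dd:

```shell
# Write 256 MiB to the iSCSI-backed filesystem and let dd report MB/s.
# conv=fsync forces the data to disk before dd prints its timing, so
# the number reflects disk + network speed, not the page cache.
dd if=/dev/zero of=/mnt/iscsi/ddtest bs=1M count=256 conv=fsync
rm -f /mnt/iscsi/ddtest
```

    If dd reports healthy speeds but rsync still crawls at 2 MBytes/sec,
    the bottleneck is the file scanning (lots of small files), not the
    transport.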

    > If not, does anyone here have rough specs of data
    > transfer over iSCSI? I've asked the vendor, *repeatedly* for any kind of
    > customer-reported stats, and they seem to have found a mantra of 'it
    > depends on local configuration', and are completely unwilling to even
    > give a hint, nor do their brochures say expected transfer speeds that
    > I've been able to find.


    iSCSI over gigabit should easily get you 50-70 MBytes/sec. With
    proper TOE (TCP offload engine) support on the cards, it might go
    even higher. Some people also count upstream and downstream together,
    which roughly doubles that figure (100-140 MBytes/sec). But the
    vendor is right... it really does depend on the network, the quality
    of the cards, even the system bus the NIC sits on.

    Even a poorly configured iSCSI setup over gigabit will likely get you
    30-50 MBytes/sec... though I suppose it could be really messed up and
    do even worse.

    If you have a dedicated network for storage, you can try running
    larger frame sizes (jumbo frames) to see if they help.
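
    A minimal jumbo-frame setup might look like this (eth1 and
    192.168.10.5 are placeholder names; every host and switch port on the
    storage network must use the same MTU):

```shell
# Enable 9000-byte jumbo frames on the storage NIC (eth1 is a
# placeholder). If any hop on the storage network isn't set the same
# way, you'll see silent drops or fragmentation instead of a speedup.
ip link set dev eth1 mtu 9000

# Verify end-to-end with a full-size, don't-fragment ping: 9000 minus
# 28 bytes of IP+ICMP headers. 192.168.10.5 is a placeholder target.
ping -M do -s 8972 -c 3 192.168.10.5
```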

    You'll get even better performance with more NICs - again, speaking
    in total aggregate (more load handled at the higher speeds).

    >
    > Has anyone here worked with it enough to estimate such bandwidth for me?
    > Or tell me how many distinct iSCSI devices I can typically export from
    > the same server before overloading it?


    The values stated above are aggregates across all transfers (assuming
    a single 1 Gbit Ethernet connection).

    You can expect about 30-40 MBytes/sec with NFS (NAS) on a good
    gigabit network.

    If we're talking 100 Mbit... then suddenly your 2 MBytes/sec (albeit
    slow) is not out of the realm of possibility for such a slow network.
    I'd usually consider even 5 MBytes/sec slow on 100 Mbit, though.
    So... still too slow, IMHO.

  6. Re: iSCSI performance, vs. NFS, versus rsync

    Chris Cox wrote:
    > Nico Kadel-Garcia wrote:
    >> I'm dealing with an rsnapshot service, used to back up a number of
    >> systems into user-friendly daily snapshots. I'm looking at additional
    >> storage as the backup requirements grow, and have been evaluating an
    >> iSCSI and NFS appliance server. When transferring data to the mounted
    >> iSCSI device with rsync, I'm seeing only 2 MBytes/second.

    >
    > That's VERY slow.
    >
    >>
    >> Is this typical?

    >
    > No. Not even close. It can depend on how you're measuring, though:
    > for a large data set, rsync will spin its wheels for a while figuring
    > out what to transfer, so it's "possible" (but very, very unlikely)
    > that the overall time just looks bad. It also depends heavily on your
    > network speed, and on the plethora of networking variables in
    > general.
    >
    >> If not, does anyone here have rough specs of data
    >> transfer over iSCSI? I've asked the vendor, *repeatedly* for any kind
    >> of customer-reported stats, and they seem to have found a mantra of
    >> 'it depends on local configuration', and are completely unwilling to
    >> even give a hint, nor do their brochures say expected transfer speeds
    >> that I've been able to find.

    >
    > iSCSI over gigabit should easily get you 50-70 MBytes/sec. With
    > proper TOE (TCP offload engine) support on the cards, it might go
    > even higher. Some people also count upstream and downstream together,
    > which roughly doubles that figure (100-140 MBytes/sec). But the
    > vendor is right... it really does depend on the network, the quality
    > of the cards, even the system bus the NIC sits on.


    *THANKS*. That's what I needed.



    > Even a poorly configured iSCSI setup over gigabit will likely get you
    > 30-50 MBytes/sec... though I suppose it could be really messed up and
    > do even worse.
    >
    > If you have a dedicated network for storage, you can try running
    > larger frame sizes (jumbo frames) to see if they help.
    >
    > You'll get even better performance with more NICs - again, speaking
    > in total aggregate (more load handled at the higher speeds).
    >
    >>
    >> Has anyone here worked with it enough to estimate such bandwidth for
    >> me? Or tell me how many distinct iSCSI devices I can typically export
    >> from the same server before overloading it?

    >
    > The values stated above are aggregates across all transfers (assuming
    > a single 1 Gbit Ethernet connection).
    >
    > You can expect about 30-40 MBytes/sec with NFS (NAS) on a good
    > gigabit network.
    >
    > If we're talking 100 Mbit... then suddenly your 2 MBytes/sec (albeit
    > slow) is not out of the realm of possibility for such a slow network.
    > I'd usually consider even 5 MBytes/sec slow on 100 Mbit, though.
    > So... still too slow, IMHO.

