iSCSI performance problem - Storage

Thread: iSCSI performance problem

  1. iSCSI performance problem

    Hello

    We have purchased an IBM iSCSI SAN but we are experiencing very poor
    performance from it. The iSCSI SAN and network run at 1 gigabit but we
    only get about 7-8% of the available throughput.

    I noticed this section in the iSCSI 2.04 user guide, and I am wondering
    whether it can really be true:

    -----
    Windows does not support disks that have been formatted to anything
    other than a 512-byte block size. Block size refers to the low-level
    formatting of the disk and not the cluster or allocation size used by
    NTFS. Be aware that using a disk with a block size larger than 512
    bytes will cause applications not to function correctly. You should
    check with your iSCSI target manufacturer to ensure that their default
    block size is set to 512 bytes, or problems will likely occur.
    -----

    Our iSCSI SAN can only format the arrays I create at 16 KB, 32 KB,
    64 KB, 128 KB, 512 KB and 1024 KB.


  2. Re: iSCSI performance problem


    zakkuto@gmail.com wrote:
    > We have purchased an IBM iSCSI SAN but we are experiencing very poor
    > performance from it. The iSCSI SAN and network run at 1 gigabit but we
    > only get about 7-8% of the available throughput.
    >
    > Our iSCSI SAN can only format the arrays I create at 16 KB, 32 KB,
    > 64 KB, 128 KB, 512 KB and 1024 KB.


    The sizes you are listing are more than likely the stripe sizes, not
    the block (or sector) size of the actual disks or LUNs.

    Regardless, I would contact your storage vendor's (IBM) support for
    help with troubleshooting this performance issue.

    rgds,
    Edwin.



  3. Re: iSCSI performance problem


    Hi

    zakkuto@gmail.com wrote:
    > We have purchased an IBM iSCSI SAN but we are experiencing very poor
    > performance from it. The iSCSI SAN and network run at 1 gigabit but we
    > only get about 7-8% of the available throughput.


    How do you measure the performance of the SAN? How do you generate the load?

    Regards,

    Kaj

  4. Re: iSCSI performance problem

    > > We have purchased an IBM iSCSI SAN but we are experiencing very poor
    > > performance from it. The iSCSI SAN and network run at 1 gigabit but
    > > we only get about 7-8% of the available throughput.

    >
    > How do you measure the performance of the SAN? How do you generate the load?


    By copying large files from the local C: drive to the iSCSI-attached
    drive letters, for example D:\. When monitoring with Task Manager, on
    the Networking tab, I see 8% utilization of the iSCSI NIC with small
    peaks of about 12%.

    I don't know of other tools to measure file-copy performance.

    Soren


  5. Re: iSCSI performance problem

    zakkuto@gmail.com wrote:
    >> How do you measure the performance of the SAN? How do you generate the load?

    >
    > By copying large files from the local C: drive to the iSCSI-attached
    > drive letters, for example D:\. When monitoring with Task Manager, on
    > the Networking tab, I see 8% utilization of the iSCSI NIC with small
    > peaks of about 12%.
    >
    > I don't know of other tools to measure file-copy performance.


    The tool I prefer for testing the performance of storage is IOmeter. It
    works best if you can point it to a raw, unformatted partition on your
    storage. You can try that and post your results (including how you
    tested) and the configuration of your storage; then we can probably say
    whether it is worse than expected.

    You could try some continuous reads/writes first.
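
    If IOmeter isn't at hand, a rough sequential-write number can be had
    with a small script like the sketch below (the path and sizes are
    placeholders). Note that, unlike IOmeter on a raw partition, this still
    goes through the Windows filesystem cache, so treat the result as a
    rough upper bound only.

        # Rough sequential-write check against the iSCSI volume.
        # TARGET and the sizes are placeholders -- adjust as needed.
        import os, time

        TARGET = r"D:\iscsi_test.bin"   # hypothetical file on the iSCSI drive
        BLOCK = 1024 * 1024             # write in 1 MiB chunks
        TOTAL = 2 * 1024**3             # 2 GiB test file

        buf = os.urandom(BLOCK)
        start = time.perf_counter()
        with open(TARGET, "wb") as f:
            written = 0
            while written < TOTAL:
                f.write(buf)
                written += BLOCK
            f.flush()
            os.fsync(f.fileno())        # flush dirty pages before stopping the clock
        elapsed = time.perf_counter() - start
        print(f"{TOTAL / elapsed / 1e6:.1f} MB/s sequential write")
        os.remove(TARGET)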

    Regards,

    Kaj

  6. Re: iSCSI performance problem

    First of all, try to tune the network itself. Here are some recommended
    settings to apply on the initiator side:

    http://www.rocketdivision.com/forum/viewtopic.php?t=792

    After applying them you should be able to get wire speed on raw TCP
    traffic (tested with NTttcp and iperf).
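
    If neither tool is installed, a bare-bones stand-in for that raw TCP
    test could look like the sketch below (the port and transfer size are
    arbitrary). Run the receiver on one box, the sender on the other, and
    compare the reported rate to the roughly 110 MB/s you would expect from
    a healthy gigabit link.

        # Minimal raw-TCP throughput check (a crude stand-in for NTttcp/iperf).
        # Usage: "python tcp_test.py server" on one host,
        #        "python tcp_test.py client <server-ip>" on the other.
        import socket, sys, time

        PORT = 5001              # arbitrary test port
        BLOCK = 64 * 1024        # 64 KiB sends
        TOTAL = 1024**3          # push 1 GiB through the link

        def server():
            with socket.create_server(("", PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    received, start = 0, time.perf_counter()
                    while chunk := conn.recv(BLOCK):
                        received += len(chunk)
                    elapsed = time.perf_counter() - start
                    print(f"{received / elapsed / 1e6:.1f} MB/s received")

        def client(host):
            data = b"\0" * BLOCK
            sent = 0
            with socket.create_connection((host, PORT)) as conn:
                while sent < TOTAL:
                    conn.sendall(data)
                    sent += BLOCK

        if __name__ == "__main__":
            client(sys.argv[2]) if sys.argv[1] == "client" else server()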

    Make sure both your target and initiator are configured with the same
    jumbo frame size (9K works best from what we've tried) and that
    intermediate hardware (switches, routers) doesn't break jumbo frames.
    I'd even try connecting the initiator to the target with a crossover
    cable to isolate the issue.

    Then get a fast PC, install a software iSCSI target capable of wire
    speed (DataCore SANmelody or RDS StarWind) and use the initiator to map
    its storage. If you get wire speed with IOmeter (32 KB+ blocks), then
    both your network hardware/topology and your initiator/client PC are OK
    and you can call IBM support asking "What the **** is going on?!?"

    -anton

    zakk...@gmail.com wrote:
    > We have purchased an IBM iSCSI SAN but we are experiencing very poor
    > performance from it. The iSCSI SAN and network run at 1 gigabit but we
    > only get about 7-8% of the available throughput.



  7. Re: iSCSI performance problem

    Which model IBM SAN?


    zakkuto@gmail.com wrote:
    > By copying large files from the local C: drive to the iSCSI-attached
    > drive letters, for example D:\. When monitoring with Task Manager, on
    > the Networking tab, I see 8% utilization of the iSCSI NIC with small
    > peaks of about 12%.




  8. Re: iSCSI performance problem

    Hello Anton

    Thanks for your reply.

    > First of all try to tune network itself. Here are some recommended
    > settings you need to apply on the initiator side:
    > http://www.rocketdivision.com/forum/viewtopic.php?t=792


    I will try to create the mentioned registry keys. The article says to
    modify the keys, but on my file servers the keys don't exist, so I
    guess I will just create them (see the sketch after the list):

    1) GlobalMaxTcpWindowSize = 0x01400000 (DWORD)
    2) TcpWindowSize = 0x01400000 (DWORD)
    3) Tcp1323Opts = 3 (DWORD)
    4) SackOpts = 1 (DWORD)
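
    For reference, a minimal sketch of creating those four values from
    Python, assuming the standard
    HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters location; run
    it elevated and back up the registry first. A reboot is usually needed
    before the TCP stack picks the new values up.

        # Sketch: create the four DWORD values listed above.
        # Assumes the standard Tcpip\Parameters key; run as Administrator.
        import winreg

        PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
        values = {
            "GlobalMaxTcpWindowSize": 0x01400000,
            "TcpWindowSize": 0x01400000,
            "Tcp1323Opts": 3,
            "SackOpts": 1,
        }

        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PARAMS, 0,
                                winreg.KEY_SET_VALUE) as key:
            for name, value in values.items():
                winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
                print(f"set {name} = {value:#x}")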

    > Make sure both your target and initiator are configured with the same
    > jumbo frame size (9K works best from what we've tried) and that
    > intermediate hardware (switches, routers) doesn't break jumbo frames.
    > I'd even try connecting the initiator to the target with a crossover
    > cable to isolate the issue.


    I have already set the target and the initiator to use jumbo frames;
    this made no difference. I have removed all the network equipment
    between the initiator and target and I am currently running with a
    crossover cable. This did not increase the performance.

    > Then get a fast PC, install a software iSCSI target capable of wire
    > speed (DataCore SANmelody or RDS StarWind) and use the initiator to
    > map its storage. If you get wire speed with IOmeter (32 KB+ blocks),
    > then both your network hardware/topology and your initiator/client PC
    > are OK and you can call IBM support asking "What the **** is going
    > on?!?"


    I will try this tomorrow.

    Soren


  9. Re: iSCSI performance problem

    On 14 Jun., 01:29, "John Fullbright" wrote:

    > Which model IBM SAN?


    It is an IBM TotalStorage DS300 in a dual-controller redundant
    configuration, equipped with 14 x 300 GB 10k RPM disks.

    It is IBM's entry-level iSCSI SAN, but I should still be able to get
    better performance from it.

    Soren


  10. Re: iSCSI performance problem

    I don't think the bottleneck is the channel; it's the disks. How did
    you configure the disks?

    A 10K RPM FC/SCSI disk can do about 135 IOPS at a 20 ms response time.
    At 60 ms it's about 170 IOPS. If you pull out all the stops and don't
    care about response time, it's around 200 IOPS.

    After considering these raw per-spindle numbers, you have to take into
    account the write penalty of the RAID type. If P is the performance of
    a single spindle in IOPS, and N is the number of spindles in the array:

    RAID 5 write performance = P*(N-1)/4

    RAID 10 write performance = P*N/2

    If you took all 14 drives, put them in a RAID 5 array, and used the max
    IOPS/spindle, then write performance = 200 * (14-1)/4, or 650 IOPS.
    We'll assume 16 KB I/Os. That's 16384 * 650, or 10,649,600 bytes/sec,
    which is roughly 8% of the available bandwidth on 1 GbE. As you can
    guess, with either RAID 10 or RAID 5, the read performance is much
    better.

    RAID 5 read performance = P*(N-1)
    RAID 10 read performance = P*N
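
    Plugging the numbers from above into those formulas, just to show where
    the ~8% figure comes from:

        # Worked example: 14 spindles at ~200 IOPS each, RAID 5 write
        # penalty of 4, 16 KB I/Os, compared against a 1 GbE link.
        P, N = 200, 14                       # IOPS per spindle, spindle count
        io_size = 16 * 1024                  # bytes per I/O
        write_iops = P * (N - 1) / 4         # RAID 5 write performance = 650 IOPS
        throughput = write_iops * io_size    # = 10,649,600 bytes/s
        gige = 1e9 / 8                       # 1 GbE = 125,000,000 bytes/s
        print(f"{write_iops:.0f} IOPS -> {throughput:,.0f} B/s "
              f"({throughput / gige:.1%} of 1 GbE)")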





    zakkuto@gmail.com wrote:
    > It is an IBM TotalStorage DS300 in a dual-controller redundant
    > configuration, equipped with 14 x 300 GB 10k RPM disks.




  11. Re: iSCSI performance problem

    What did you "export" as the iSCSI volume? Can you run a benchmark on
    the target itself to find out the local disk access performance?

    -anton

    zakk...@gmail.com wrote:
    > I have already set the target and the initiator to use jumbo frames;
    > this made no difference. I have removed all the network equipment
    > between the initiator and target and I am currently running with a
    > crossover cable. This did not increase the performance.



  12. Re: iSCSI performance problem

    On 14 Jun., 19:41, "John Fullbright" wrote:

    Hello John, thanks for your reply.

    > I don't think the bottleneck is the channel; it's the disks. How did
    > you configure the disks?
    >
    > If you took all 14 drives, put them in a RAID 5 array, and used the
    > max IOPS/spindle, then write performance = 200 * (14-1)/4, or 650
    > IOPS. We'll assume 16 KB I/Os. That's 16384 * 650, or 10,649,600
    > bytes/sec, which is roughly 8% of the available bandwidth on 1 GbE.


    I find this hard to believe... In comparison, on my single-hard-drive
    workstation I get about 40 MB/sec write performance. I use 8 disks in a
    RAID 5 on the iSCSI SAN and I only get about 10 MB/sec write
    performance.

    If I can't get more throughput from an 8-disk array than this, I must
    say that I am pretty amazed.

    Kind regards

    Soren


  13. Re: iSCSI performance problem

    On a single 10K RPM drive, write performance (not caring about response
    time) is ~200 IOPS. The differences you see are more likely due to the
    Windows filesystem cache on the host. Another difference could be the
    allocation unit size: if NTFS receives an I/O request larger than the
    allocation unit size for the volume, it splits the request into
    multiple I/Os.



    zakkuto@gmail.com wrote:
    > I find this hard to believe... In comparison, on my single-hard-drive
    > workstation I get about 40 MB/sec write performance. I use 8 disks in
    > a RAID 5 on the iSCSI SAN and I only get about 10 MB/sec write
    > performance.




  14. Re: iSCSI performance problem

    John, I think those numbers you're quoting are for 100% random 512 B or
    1 KB workloads. With any type of sequential reads/writes mixed in, that
    number can go up, especially when taking disk cache into account.

    10 MB/s is extremely slow regardless of the type/number of drives
    installed. For example, as Zakkuto stated, a 5400 RPM IDE or SATA
    laptop drive can run 40-50 MB/s sequential reads/writes.

    In any event, 10 MB/s is a peculiar number, since that could equate to
    using 100 Mb/s Ethernet instead of 1 Gb/s, or to a misconfiguration on
    the switch/target.

    Check your host/switch/target port speeds and ensure you have 1 Gb/s;
    otherwise you will only get about 10 MB/s max.
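
    As a rough sanity check on those figures (the 10% overhead is only an
    assumed allowance for Ethernet/IP/TCP/iSCSI headers):

        # Approximate usable-throughput ceilings per link speed.
        OVERHEAD = 0.10                          # assumed protocol overhead
        for name, bits in (("100 Mb/s", 100e6), ("1 Gb/s", 1e9)):
            line_rate = bits / 8 / 1e6           # MB/s before overhead
            print(f"{name}: {line_rate:6.1f} MB/s line rate, "
                  f"~{line_rate * (1 - OVERHEAD):.0f} MB/s usable")

    A sustained copy around 10-11 MB/s therefore lines up almost exactly
    with a 100 Mb/s link somewhere in the path.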

    ~kenny

    "John Fullbright" wrote in message
    news:%23mQlrucsHHA.1188@TK2MSFTNGP04.phx.gbl...
    > On a single 10K RPM drive, write performance (not caring about response
    > time for a 10K spindle is ~ 200 IOPS. The differences you see are more
    > likely due to the Windows filesystem cache on the host. Another
    > difference could be the allocation unit size. If NTFS reveives an IO
    > request larger than the allocation unit size for the volume, it splits the
    > request into multiple IOs.
    >
    >
    >
    > wrote in message
    > news:1182174607.686260.64220@q75g2000hsh.googlegro ups.com...
    >> On 14 Jun., 19:41, "John Fullbright"
    >> wrote:
    >>
    >> Hello John, tahnsk for your reply.
    >>
    >>> I don't think the bottleneck is the channel, it's the disk. How did you
    >>> configure the disk?

    >>
    >>> If you took all 14 drives and put the in a RAID 5 array, and used the
    >>> max
    >>> IOPS/spindle, then
    >>>
    >>> writeperformance= 200 * (14-1)/4 or 650 IOPS. We'll assume 16K IOs.
    >>> That's 16K * 650 or 10649600 bytes/sec ~ 8% of the available bandwidth
    >>> on
    >>> 1Ge.

    >>
    >> I find this hard to believe... In comparrison, on my single harddrive
    >> workstation I get about 40mb/sec write performance.
    >> I use 8 disks in a raid5 on the iSCSI SAN and I only get about 10mb/
    >> sec write performance.
    >>
    >> If I cant get more throughput from a 8 disk array than this, i must
    >> say that I am pretty amazed.
    >>
    >> Kind regards
    >>
    >> Soren
    >>

    >
    >



  15. Re: iSCSI performance problem

    Do you use a hardware RAID controller to build the RAID 5? Make sure it
    has "write back" cache ENABLED, or you'll have dull performance and
    will kill the disks with constant spin-up/down and seeks.

    -anton

    zakk...@gmail.com wrote:
    > I use 8 disks in a RAID 5 on the iSCSI SAN and I only get about
    > 10 MB/sec write performance.


