Disk Queuing and multiple logical drives - Storage


Thread: Disk Queuing and multiple logical drives

  1. Disk Queuing and multiple logical drives

    We have found that our 1 terabyte SQL Server DB hits an OS-level IO
    bottleneck reading and writing to our SAN. Our storage guru explained that
    creating more logical drives will greatly increase our IO throughput, as
    Windows is limited in how much data it will write to a single logical device
    at one time. Several SQL Server folks have suggested that the number of
    logical drives is irrelevant and only the number of physical drives matters.
    Below is the explanation our guru gave, but I wanted to see if the
    Microsoft-centric folks have anything different to say. Is there perhaps
    something we should be doing differently, rather than creating multiple
    logical drives in place of one big one?

    Any advice is appreciated.

    *******************************************************************************

    Whenever you present disk space from a SAN to any server, Wintel or Unix,
    you have the choice of presenting:
    - One large space that you carve up into multiple logical drives or
    Unix filesystems
    - Multiple smaller spaces each dedicated to one Wintel drive or Unix
    filesystem

    In the first case you present only one physical device to the O/S; in the
    second case multiple devices are presented.

    Both Wintel and Unix use a queuing mechanism for I/O to physical disk
    devices; there is typically one queue per physical device and a certain
    amount of disk buffer space allocated to all physical devices.

    No matter how many disk spindles are presented from the SAN, if you're set
    up as in the first case, you're queuing all disk I/O to all logical drives
    through one O/S-level disk queue. That typically backs you up.

    In the second case, you've opened up multiple physical paths and thus
    multiple disk queues so you can handle more traffic.

    The best analogy is emptying a full stadium onto a very wide sidewalk.
    Emptying it through one door has to be slower than using 5 or 10 doors,
    assuming the sidewalk outside can handle the traffic from all the doors you
    open up simultaneously.
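
    A toy back-of-the-envelope model of that argument, in Python -- this is not
    a description of Windows internals, and the service time, array parallelism,
    and per-LUN queue depth below are assumed numbers chosen only for
    illustration:

        # Toy model: a per-device queue-depth cap limits how many I/Os can be
        # in flight, even when the back-end array could absorb more in parallel.
        SERVICE_TIME_MS = 5.0      # assumed time the array needs per I/O
        ARRAY_PARALLELISM = 128    # assumed concurrent I/Os the array can absorb
        QUEUE_DEPTH_PER_LUN = 32   # assumed per-device host queue depth

        def iops(num_luns: int) -> float:
            """Throughput when in-flight I/O is capped at num_luns * depth."""
            in_flight = min(num_luns * QUEUE_DEPTH_PER_LUN, ARRAY_PARALLELISM)
            return in_flight * (1000.0 / SERVICE_TIME_MS)  # Little's law: L / W

        for luns in (1, 2, 4, 8):
            print(f"{luns} LUN(s): ~{iops(luns):,.0f} IOPS")

    With one device you top out at the 32 in-flight I/Os that single queue
    allows; at four devices the assumed back end, not the host queues, becomes
    the limit, and adding more gains nothing.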

    To optimize everything, you also need to understand your DBMS and how the
    O/S allows you to enlarge disks/volumes.

    For example, in the Unix Epic implementation, they have 8 database writers
    and found that 8 disks, or filesystems, for the data was the sweet spot.
    More and you didn't get any benefit; fewer and you bottlenecked the I/O.

    In Wintel, if you carve up a physical drive into more than one logical
    drive, you can only extend the last drive if you ever add space to the
    physical drive presented from the SAN, unless you want to get into using
    Wintel Dynamic Disks. According to quite a few sysadmins and vendors I've
    talked to, dynamic disks have much poorer performance than regular disks.

    If physicaldrive1 has |F:|G:|H:| carved up on it, and you run out of space
    and extend it on the SAN, you'll have |F:|G:|H:|Free Space|.

    You can only then extend H: without going to dynamic disks.

    That doesn't help if G: has filled up and needs space, because you'd have to
    convert G: into a dynamic disk and take a performance hit.





  2. Re: Disk Queuing and multiple logical drives


    "Jim Underwood" wrote in message
    news:ew44YgdjIHA.5280@TK2MSFTNGP02.phx.gbl...
    > We have found that our 1 terabyte SQL Server DB hits an OS-level IO
    > bottleneck reading and writing to our SAN. Our storage guru explained
    > that creating more logical drives will greatly increase our IO throughput,
    > as Windows is limited in how much data it will write to a single logical
    > device at one time. Several SQL Server folks have suggested that the
    > number of logical drives is irrelevant and only the number of physical
    > drives matters. Below is the explanation our guru gave, but I wanted to
    > see if the Microsoft-centric folks have anything different to say. Is
    > there perhaps something we should be doing differently, rather than
    > creating multiple logical drives in place of one big one?
    >
    > Any advice is appreciated.


    I would suggest using DataMover's performance tool to test this hypothesis.
    Evaluation licenses are available upon request.

    Moojit

    www.moojit.net




  3. Re: Disk Queuing and multiple logical drives

    If we assume you're using a Storport driver for your HBA, then:

    1. You have a queue at the HBA level.
    2. Storport implements a LUN queue for each LUN.
    3. You have a disk queue for each logical drive at disk.sys.

    I have never seen the logical disk queue be the source of an IO bottleneck.
    I have seen the logical disk queues indicate an issue further down the
    storage stack. I have seen the HBA queue and the Storport LUN queue cause
    issues. If the HBA queue depth is set too high (the sum of the HBA queue
    depths for the HBAs attached to a given port on a storage controller exceeds
    the queue depth of the storage controller port), then you can get a queue
    full sent back to the host HBA, which can cause throttling and performance
    degradation. If it's set too low, then the HBA reports a queue full back to
    Storport. See the HBA vendor's guidance for setting an appropriate queue
    depth. For QLogic, it's the execution throttle setting.

    The miniport driver sets the Storport LUN queue depth via a registry
    setting. Setting this too high or too low can cause the HBA to send a queue
    full back to Storport. For QLogic, this is set to 32 decimal by the
    miniport. I have seen instances where, with the execution throttle set to
    128 and only a couple of LUNs, increasing this produced substantial gains
    in performance.

    https://now.netapp.com/Knowledgebase...asp?id=kb26454
    http://download.qlogic.com/drivers/5...ver_parameters
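
    If you want to see what your miniport has configured, a quick sketch like
    the one below reads the driver's parameter string out of the registry. The
    service key name (ql2300) and the DriverParameter value name are
    assumptions based on common QLogic installs, so check your own driver's
    documentation for the real location before trusting it:

        # Hedged sketch: print the miniport's DriverParameter string, which on
        # many QLogic installs carries the LUN queue depth (e.g. "qd=32").
        # The key path below is an assumption -- adjust it for your driver.
        import winreg

        KEY_PATH = r"SYSTEM\CurrentControlSet\Services\ql2300\Parameters\Device"

        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
                value, _type = winreg.QueryValueEx(key, "DriverParameter")
                print("DriverParameter:", value)
        except FileNotFoundError:
            print("Key or value not found -- adjust KEY_PATH for your miniport.")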

    To determine if there is an issue, and if your storage controller can report
    latency statistics, look for a large discrepancy between the latency that
    the perfmon physical disk counters report and the latency that the storage
    controller reports. Typically, an unexplained gap in latency means that IO
    is queuing at the host beneath the disk class driver (Storport or the
    miniport driver). If the latency numbers measured by the perfmon physical
    disk counters and those measured at the storage controller are similar, and
    the latency is high, that indicates you don't have enough spindles to
    support the workload.
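
    A minimal way to capture the host-side numbers for that comparison, using
    the built-in typeperf tool (this assumes the standard English-locale
    counter names; the output file name is just an example):

        # Sample perfmon PhysicalDisk latency and queue counters to CSV so they
        # can be lined up against what the storage controller reports.
        import subprocess

        COUNTERS = [
            r"\PhysicalDisk(*)\Avg. Disk sec/Read",
            r"\PhysicalDisk(*)\Avg. Disk sec/Write",
            r"\PhysicalDisk(*)\Current Disk Queue Length",
        ]

        # 30 samples at 1-second intervals, written to host_disk_latency.csv.
        subprocess.run(
            ["typeperf", *COUNTERS, "-si", "1", "-sc", "30",
             "-f", "CSV", "-o", "host_disk_latency.csv", "-y"],
            check=True,
        )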




    "Jim Underwood" wrote in message
    news:ew44YgdjIHA.5280@TK2MSFTNGP02.phx.gbl...
    > We have found that our 1 terabyte SQL Server DB hits an OS level IO
    > bottleneck reading and writing to our SAN. Our storage guru explained
    > that creating more logical drives will greatly increase our IO throughput,
    > as windows is limited to how much data it will write to a single logical
    > device at one time. Several SQL server folks have suggested that the
    > number of logical drives is irrelevant, and only the number of physical
    > drives matter. Below is the explanation our guru gave, but I wanted to see
    > if the microsoft-centric folks have anything different to say. Is there
    > perhaps something that we should be doing differently rather than creating
    > multiple logical drives in place of one big one?
    >
    > Any advice is apreciated.
    >
    > ************************************************** ***************************
    >
    > Whenever you present disk space from a SAN to any server, Wintel or Unix,
    > you have the choice of presenting:
    > - One large space that you carve up into multiple logical drives or
    > Unix filesystems
    > - Multiple smaller spaces each dedicated to one Wintel drive or
    > Unix filesystem
    >
    > In the first case you only present one physical device to the O/S,
    > multiple devices are presented in the second case.
    >
    > Both Wintel and Unix use a queuing mechanism for I/O to physical disk
    > devices, there's one que typically per physical device and a certain
    > amount of disk buffer space allocated to all physical devices.
    >
    > No matter how many disk spindles are presented from the SAN, if you're
    > setup using the first case, you're queuing all disk I/O to all logical
    > drives through one O/S level disk queue. That typically backs you up.
    >
    > In the second case, you've opened up multiple physical paths and thus
    > multiple disk queues so you can handle more traffic.
    >
    > Best analogy is emptying out a full stadium onto a very wide sidewalk.
    > Empty it out through one door and it has to be slower than using 5 or 10
    > doors, assuming the outside sidewalk can handle the traffic from all the
    > doors you open up simultaneously.
    >
    > You need to understand your DBMS also to optimize everything and also how
    > the O/S allows you to enlarge disks/volumes.
    >
    > For example, in the Unix Epic implementation, they have 8 database writers
    > and found that 8 disks, or filesystems, for the data was the sweet spot.
    > More, and you didn't get any benefit, fewer and you bottlenecked the I/O.
    >
    > In Wintel, if you carve up a physical drive into more than one logical
    > drive, you can only extend the last drive if you ever add space to the
    > physical drive presented from the SAN unless you want to get into using
    > Wintel Dynamic Disks. According to quite a few sysadmins and vendors I've
    > talked to, they have much poorer performance than regular disks.
    >
    > If physicaldrive1 has |F:|G:|H:| carved up on it, and you run out of
    > space and extend it on the SAN, you'll have |F:|G:|H:|Free Space|.
    >
    > You can only then extend H: without going to dynamic disks.
    >
    > That doesn't help if G: has filled up and needs space, because you'd have
    > to convert G: into a dynamic disk and take a performance hit.
    >
    >
    >
    >




  4. Re: Disk Queuing and multiple logical drives

    Thank you. Honestly, most of this is over my head, but I'll take it to my
    storage guru and hopefully get a better understanding of what is going on.

    "John Fullbright" wrote in message
    news:uE3CueqjIHA.484@TK2MSFTNGP06.phx.gbl...
    > If we assume you're using a Storport driver for your HBA, then
    >
    > 1. You have a Queue at the HBA level
    > 2. Storport implements LUN queues for each LUN.
    > 3. You have a disk queue for each logical driver at disk.sys.
    >
    > I have never seen the logical disk queue be the source of an IO
    > bottleneck. I have seen the logical disk queues indicate an issue further
    > down the storage stack. I have seen the HBA queue and the Storport LUN
    > queue cause issues. If the HBA queue is two high (the sum of HBA queue
    > depths for HBAs attached to a given port on a storage controller exceeds
    > the queue depth of the storage controller port) then you could get a queue
    > full sent back to the host HBA which can cause throttling and performance
    > degredation. If it's set too low, then the HBA reports a queuefull back
    > to storport. See the HBA vendors guidance for setting an appropriate
    > queue depth. For Qlogic, it's the execution throttle setting.
    >
    > The miniport driver sets the Storport LUN queue depth via a registry
    > setting. Situations with this set to high or too low can cause the HBA
    > to send a queue full back to storport. For Qlogic, this is set to 32
    > decimal by the minport. I have seen instances where, when the execution
    > throttle is set to 128 and you only have a couple of LUNs, increasing this
    > can cause substantial gains in performance.
    >
    > https://now.netapp.com/Knowledgebase...asp?id=kb26454
    > http://download.qlogic.com/drivers/5...ver_parameters
    >
    > To determine if there is an issue, and if your storage controller can
    > report latency statistics, look for a large discrepency between the
    > latency that the perfmon physical disk counters report and the latency
    > that the storage controller reports. Typically, an unexplained gap in
    > latency means that IO is queuing at the host beneath the disk class driver
    > (storport or the miniport driver). If the latency numbers as measured by
    > the perfom physical disk counters and measured at the storage controller
    > ar similar, and the latency is high, this would indicate that you don't
    > have enough spindles to support the workload.
    >
    >
    >
    >
    > "Jim Underwood" wrote in message
    > news:ew44YgdjIHA.5280@TK2MSFTNGP02.phx.gbl...
    >> We have found that our 1 terabyte SQL Server DB hits an OS level IO
    >> bottleneck reading and writing to our SAN. Our storage guru explained
    >> that creating more logical drives will greatly increase our IO
    >> throughput, as windows is limited to how much data it will write to a
    >> single logical device at one time. Several SQL server folks have
    >> suggested that the number of logical drives is irrelevant, and only the
    >> number of physical drives matter. Below is the explanation our guru gave,
    >> but I wanted to see if the microsoft-centric folks have anything
    >> different to say. Is there perhaps something that we should be doing
    >> differently rather than creating multiple logical drives in place of one
    >> big one?
    >>
    >> Any advice is apreciated.
    >>
    >> ************************************************** ***************************
    >>
    >> Whenever you present disk space from a SAN to any server, Wintel or Unix,
    >> you have the choice of presenting:
    >> - One large space that you carve up into multiple logical drives
    >> or Unix filesystems
    >> - Multiple smaller spaces each dedicated to one Wintel drive or
    >> Unix filesystem
    >>
    >> In the first case you only present one physical device to the O/S,
    >> multiple devices are presented in the second case.
    >>
    >> Both Wintel and Unix use a queuing mechanism for I/O to physical disk
    >> devices, there's one que typically per physical device and a certain
    >> amount of disk buffer space allocated to all physical devices.
    >>
    >> No matter how many disk spindles are presented from the SAN, if you're
    >> setup using the first case, you're queuing all disk I/O to all logical
    >> drives through one O/S level disk queue. That typically backs you up.
    >>
    >> In the second case, you've opened up multiple physical paths and thus
    >> multiple disk queues so you can handle more traffic.
    >>
    >> Best analogy is emptying out a full stadium onto a very wide sidewalk.
    >> Empty it out through one door and it has to be slower than using 5 or 10
    >> doors, assuming the outside sidewalk can handle the traffic from all the
    >> doors you open up simultaneously.
    >>
    >> You need to understand your DBMS also to optimize everything and also how
    >> the O/S allows you to enlarge disks/volumes.
    >>
    >> For example, in the Unix Epic implementation, they have 8 database
    >> writers and found that 8 disks, or filesystems, for the data was the
    >> sweet spot. More, and you didn't get any benefit, fewer and you
    >> bottlenecked the I/O.
    >>
    >> In Wintel, if you carve up a physical drive into more than one logical
    >> drive, you can only extend the last drive if you ever add space to the
    >> physical drive presented from the SAN unless you want to get into using
    >> Wintel Dynamic Disks. According to quite a few sysadmins and vendors I've
    >> talked to, they have much poorer performance than regular disks.
    >>
    >> If physicaldrive1 has |F:|G:|H:| carved up on it, and you run out of
    >> space and extend it on the SAN, you'll have |F:|G:|H:|Free Space|.
    >>
    >> You can only then extend H: without going to dynamic disks.
    >>
    >> That doesn't help if G: has filled up and needs space, because you'd have
    >> to convert G: into a dynamic disk and take a performance hit.
    >>
    >>
    >>
    >>

    >
    >




  5. Re: Disk Queuing and multiple logical drives


    "Jim Underwood" wrote in message
    news:ecm%237z0jIHA.424@TK2MSFTNGP06.phx.gbl...
    > Thank you. Honestly, most of this is over my head, but I'll take it to my
    > storage guru and hopefully get a better understanding of what is going on.


    The best thing to do in this situation is to perform some experiments (if
    possible) to determine what works best. It's true that miniport vendors
    implement their own queuing algorithms on top of whatever Microsoft may be
    doing internally. The same applies to Linux.
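
    One very crude way to run such an experiment from Python is to time random
    reads against a pre-created test file on each candidate volume. The paths
    below are placeholders, file-system caching will flatter the numbers, and a
    purpose-built tool such as SQLIO gives far more trustworthy results -- treat
    this only as a relative comparison:

        # Crude relative comparison of random-read rates across volumes.
        # Assumes large test files already exist at the listed paths.
        import os
        import random
        import time

        BLOCK = 8192                 # 8 KB, the SQL Server page size
        READS = 5000
        TEST_FILES = [r"F:\iotest.dat", r"G:\iotest.dat"]   # placeholder paths

        def random_read_rate(path: str) -> float:
            size = os.path.getsize(path)
            with open(path, "rb", buffering=0) as f:
                start = time.perf_counter()
                for _ in range(READS):
                    f.seek(random.randrange(0, size - BLOCK))
                    f.read(BLOCK)
                return READS / (time.perf_counter() - start)

        for test_file in TEST_FILES:
            print(f"{test_file}: {random_read_rate(test_file):,.0f} reads/sec")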

    Da Mooj


    >
    > "John Fullbright" wrote in message
    > news:uE3CueqjIHA.484@TK2MSFTNGP06.phx.gbl...
    >> If we assume you're using a Storport driver for your HBA, then
    >>
    >> 1. You have a Queue at the HBA level
    >> 2. Storport implements LUN queues for each LUN.
    >> 3. You have a disk queue for each logical driver at disk.sys.
    >>
    >> I have never seen the logical disk queue be the source of an IO
    >> bottleneck. I have seen the logical disk queues indicate an issue further
    >> down the storage stack. I have seen the HBA queue and the Storport LUN
    >> queue cause issues. If the HBA queue is two high (the sum of HBA queue
    >> depths for HBAs attached to a given port on a storage controller exceeds
    >> the queue depth of the storage controller port) then you could get a
    >> queue full sent back to the host HBA which can cause throttling and
    >> performance degredation. If it's set too low, then the HBA reports a
    >> queuefull back to storport. See the HBA vendors guidance for setting an
    >> appropriate queue depth. For Qlogic, it's the execution throttle
    >> setting.
    >>
    >> The miniport driver sets the Storport LUN queue depth via a registry
    >> setting. Situations with this set to high or too low can cause the HBA
    >> to send a queue full back to storport. For Qlogic, this is set to 32
    >> decimal by the minport. I have seen instances where, when the execution
    >> throttle is set to 128 and you only have a couple of LUNs, increasing
    >> this can cause substantial gains in performance.
    >>
    >> https://now.netapp.com/Knowledgebase...asp?id=kb26454
    >> http://download.qlogic.com/drivers/5...ver_parameters
    >>
    >> To determine if there is an issue, and if your storage controller can
    >> report latency statistics, look for a large discrepency between the
    >> latency that the perfmon physical disk counters report and the latency
    >> that the storage controller reports. Typically, an unexplained gap in
    >> latency means that IO is queuing at the host beneath the disk class
    >> driver (storport or the miniport driver). If the latency numbers as
    >> measured by the perfom physical disk counters and measured at the storage
    >> controller ar similar, and the latency is high, this would indicate that
    >> you don't have enough spindles to support the workload.
    >>
    >>
    >>
    >>
    >> "Jim Underwood" wrote in
    >> message news:ew44YgdjIHA.5280@TK2MSFTNGP02.phx.gbl...
    >>> We have found that our 1 terabyte SQL Server DB hits an OS level IO
    >>> bottleneck reading and writing to our SAN. Our storage guru explained
    >>> that creating more logical drives will greatly increase our IO
    >>> throughput, as windows is limited to how much data it will write to a
    >>> single logical device at one time. Several SQL server folks have
    >>> suggested that the number of logical drives is irrelevant, and only the
    >>> number of physical drives matter. Below is the explanation our guru
    >>> gave, but I wanted to see if the microsoft-centric folks have anything
    >>> different to say. Is there perhaps something that we should be doing
    >>> differently rather than creating multiple logical drives in place of one
    >>> big one?
    >>>
    >>> Any advice is apreciated.
    >>>
    >>> ************************************************** ***************************
    >>>
    >>> Whenever you present disk space from a SAN to any server, Wintel or
    >>> Unix, you have the choice of presenting:
    >>> - One large space that you carve up into multiple logical drives
    >>> or Unix filesystems
    >>> - Multiple smaller spaces each dedicated to one Wintel drive or
    >>> Unix filesystem
    >>>
    >>> In the first case you only present one physical device to the O/S,
    >>> multiple devices are presented in the second case.
    >>>
    >>> Both Wintel and Unix use a queuing mechanism for I/O to physical disk
    >>> devices, there's one que typically per physical device and a certain
    >>> amount of disk buffer space allocated to all physical devices.
    >>>
    >>> No matter how many disk spindles are presented from the SAN, if you're
    >>> setup using the first case, you're queuing all disk I/O to all logical
    >>> drives through one O/S level disk queue. That typically backs you up.
    >>>
    >>> In the second case, you've opened up multiple physical paths and thus
    >>> multiple disk queues so you can handle more traffic.
    >>>
    >>> Best analogy is emptying out a full stadium onto a very wide sidewalk.
    >>> Empty it out through one door and it has to be slower than using 5 or 10
    >>> doors, assuming the outside sidewalk can handle the traffic from all the
    >>> doors you open up simultaneously.
    >>>
    >>> You need to understand your DBMS also to optimize everything and also
    >>> how the O/S allows you to enlarge disks/volumes.
    >>>
    >>> For example, in the Unix Epic implementation, they have 8 database
    >>> writers and found that 8 disks, or filesystems, for the data was the
    >>> sweet spot. More, and you didn't get any benefit, fewer and you
    >>> bottlenecked the I/O.
    >>>
    >>> In Wintel, if you carve up a physical drive into more than one logical
    >>> drive, you can only extend the last drive if you ever add space to the
    >>> physical drive presented from the SAN unless you want to get into using
    >>> Wintel Dynamic Disks. According to quite a few sysadmins and vendors
    >>> I've talked to, they have much poorer performance than regular disks.
    >>>
    >>> If physicaldrive1 has |F:|G:|H:| carved up on it, and you run out of
    >>> space and extend it on the SAN, you'll have |F:|G:|H:|Free Space|.
    >>>
    >>> You can only then extend H: without going to dynamic disks.
    >>>
    >>> That doesn't help if G: has filled up and needs space, because you'd
    >>> have to convert G: into a dynamic disk and take a performance hit.
    >>>
    >>>
    >>>
    >>>

    >>
    >>

    >
    >




  6. Re: Disk Queuing and multiple logical drives

    Measure at the disk class driver (the physical disk counters in perfmon)
    and measure at the storage controller (for NetApp it's the perfstat
    counters, accessible through perfmon; other vendors will vary), then
    compare the two to see if there is a discrepancy.

    So why wouldn't you measure the logical disk and the physical disk and
    compare the two? You could. A discrepancy there, however, would most likely
    be due to a filter driver.

    The two situations I describe deal with high latency as observed by perfmon
    and normal latency observed at the storage. If you see correspondingly high
    latency on both, then you're spindle-constrained.
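
    For completeness, the same typeperf approach shown earlier can capture both
    sets of host-side counters in one pass, so either comparison (logical vs.
    physical, or host vs. controller) is easy to make afterwards. Counter names
    again assume the English locale and the output file name is just an
    example:

        # Capture LogicalDisk and PhysicalDisk latency side by side; a gap
        # between the two layers would typically point at a filter driver.
        import subprocess

        COUNTERS = [
            r"\LogicalDisk(*)\Avg. Disk sec/Transfer",
            r"\PhysicalDisk(*)\Avg. Disk sec/Transfer",
        ]

        subprocess.run(
            ["typeperf", *COUNTERS, "-si", "1", "-sc", "60",
             "-f", "CSV", "-o", "logical_vs_physical.csv", "-y"],
            check=True,
        )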

    John


    "moojit" wrote in message
    news:47eba701$0$6138$4c368faf@roadrunner.com...
    >
    > "Jim Underwood" wrote in message
    > news:ecm%237z0jIHA.424@TK2MSFTNGP06.phx.gbl...
    >> Thank you. Honestly, most of this is over my head, but I'll take it to
    >> my storage guru and hopefully get a better understanding of what is going
    >> on.

    >
    > The best thing to do in this situation is perform some experiments (if
    > possible) to determine what works best. It's true that miniport vendors
    > implement their own queuing algorithms on top of what
    > Microsoft may be doing internally. Same applies to Linux
    >
    > Da Mooj
    >
    >
    >>
    >> "John Fullbright" wrote in message
    >> news:uE3CueqjIHA.484@TK2MSFTNGP06.phx.gbl...
    >>> If we assume you're using a Storport driver for your HBA, then
    >>>
    >>> 1. You have a Queue at the HBA level
    >>> 2. Storport implements LUN queues for each LUN.
    >>> 3. You have a disk queue for each logical driver at disk.sys.
    >>>
    >>> I have never seen the logical disk queue be the source of an IO
    >>> bottleneck. I have seen the logical disk queues indicate an issue
    >>> further down the storage stack. I have seen the HBA queue and the
    >>> Storport LUN queue cause issues. If the HBA queue is two high (the sum
    >>> of HBA queue depths for HBAs attached to a given port on a storage
    >>> controller exceeds the queue depth of the storage controller port) then
    >>> you could get a queue full sent back to the host HBA which can cause
    >>> throttling and performance degredation. If it's set too low, then the
    >>> HBA reports a queuefull back to storport. See the HBA vendors guidance
    >>> for setting an appropriate queue depth. For Qlogic, it's the execution
    >>> throttle setting.
    >>>
    >>> The miniport driver sets the Storport LUN queue depth via a registry
    >>> setting. Situations with this set to high or too low can cause the HBA
    >>> to send a queue full back to storport. For Qlogic, this is set to 32
    >>> decimal by the minport. I have seen instances where, when the execution
    >>> throttle is set to 128 and you only have a couple of LUNs, increasing
    >>> this can cause substantial gains in performance.
    >>>
    >>> https://now.netapp.com/Knowledgebase...asp?id=kb26454
    >>> http://download.qlogic.com/drivers/5...ver_parameters
    >>>
    >>> To determine if there is an issue, and if your storage controller can
    >>> report latency statistics, look for a large discrepency between the
    >>> latency that the perfmon physical disk counters report and the latency
    >>> that the storage controller reports. Typically, an unexplained gap in
    >>> latency means that IO is queuing at the host beneath the disk class
    >>> driver (storport or the miniport driver). If the latency numbers as
    >>> measured by the perfom physical disk counters and measured at the
    >>> storage controller ar similar, and the latency is high, this would
    >>> indicate that you don't have enough spindles to support the workload.
    >>>
    >>>
    >>>
    >>>
    >>> "Jim Underwood" wrote in
    >>> message news:ew44YgdjIHA.5280@TK2MSFTNGP02.phx.gbl...
    >>>> We have found that our 1 terabyte SQL Server DB hits an OS level IO
    >>>> bottleneck reading and writing to our SAN. Our storage guru explained
    >>>> that creating more logical drives will greatly increase our IO
    >>>> throughput, as windows is limited to how much data it will write to a
    >>>> single logical device at one time. Several SQL server folks have
    >>>> suggested that the number of logical drives is irrelevant, and only the
    >>>> number of physical drives matter. Below is the explanation our guru
    >>>> gave, but I wanted to see if the microsoft-centric folks have anything
    >>>> different to say. Is there perhaps something that we should be doing
    >>>> differently rather than creating multiple logical drives in place of
    >>>> one big one?
    >>>>
    >>>> Any advice is apreciated.
    >>>>
    >>>> ************************************************** ***************************
    >>>>
    >>>> Whenever you present disk space from a SAN to any server, Wintel or
    >>>> Unix, you have the choice of presenting:
    >>>> - One large space that you carve up into multiple logical drives
    >>>> or Unix filesystems
    >>>> - Multiple smaller spaces each dedicated to one Wintel drive or
    >>>> Unix filesystem
    >>>>
    >>>> In the first case you only present one physical device to the O/S,
    >>>> multiple devices are presented in the second case.
    >>>>
    >>>> Both Wintel and Unix use a queuing mechanism for I/O to physical disk
    >>>> devices, there's one que typically per physical device and a certain
    >>>> amount of disk buffer space allocated to all physical devices.
    >>>>
    >>>> No matter how many disk spindles are presented from the SAN, if you're
    >>>> setup using the first case, you're queuing all disk I/O to all logical
    >>>> drives through one O/S level disk queue. That typically backs you up.
    >>>>
    >>>> In the second case, you've opened up multiple physical paths and thus
    >>>> multiple disk queues so you can handle more traffic.
    >>>>
    >>>> Best analogy is emptying out a full stadium onto a very wide sidewalk.
    >>>> Empty it out through one door and it has to be slower than using 5 or
    >>>> 10 doors, assuming the outside sidewalk can handle the traffic from all
    >>>> the doors you open up simultaneously.
    >>>>
    >>>> You need to understand your DBMS also to optimize everything and also
    >>>> how the O/S allows you to enlarge disks/volumes.
    >>>>
    >>>> For example, in the Unix Epic implementation, they have 8 database
    >>>> writers and found that 8 disks, or filesystems, for the data was the
    >>>> sweet spot. More, and you didn't get any benefit, fewer and you
    >>>> bottlenecked the I/O.
    >>>>
    >>>> In Wintel, if you carve up a physical drive into more than one logical
    >>>> drive, you can only extend the last drive if you ever add space to the
    >>>> physical drive presented from the SAN unless you want to get into using
    >>>> Wintel Dynamic Disks. According to quite a few sysadmins and vendors
    >>>> I've talked to, they have much poorer performance than regular disks.
    >>>>
    >>>> If physicaldrive1 has |F:|G:|H:| carved up on it, and you run out of
    >>>> space and extend it on the SAN, you'll have |F:|G:|H:|Free Space|.
    >>>>
    >>>> You can only then extend H: without going to dynamic disks.
    >>>>
    >>>> That doesn't help if G: has filled up and needs space, because you'd
    >>>> have to convert G: into a dynamic disk and take a performance hit.
    >>>>
    >>>>
    >>>>
    >>>>
    >>>
    >>>

    >>
    >>

    >
    >



