Raid: software or hardware


Thread: Raid: software or hardware

  1. Raid: software or hardware

    Hello

    I have the mainboard: Gigabyte P965S3.
    I have 2 SATA interfaces on the built-in JMicron controller
    (and another 4 SATA interfaces).
    I upgraded the BIOS to version F12.
    I want to make a mirror (RAID 1) array.

    What's better to use: software or hardware (JMicron)?

    1. Using hardware, JMicron (creating the RAID from the BIOS).
    Will Gentoo see it? Is it stable?
    If one disk fails, how will I know?
    Will there be any problem replacing the bad disk with a new one?

    2. Software RAID (using mdadm)

    What's better? Has anybody used this JMicron RAID?

    Thanks

  2. Re: Raid: software or hardware

    On Wednesday 11 June 2008 07:49, *vertigo* wrote
    in /comp.os.linux.hardware:/

    > Hello
    >
    > I have the mainboard: Gigabyte P965S3.
    > I have 2 SATA interfaces on the built-in JMicron controller
    > (and another 4 SATA interfaces).
    > I upgraded the BIOS to version F12.
    > I want to make a mirror (RAID 1) array.
    >
    > What's better to use: software or hardware (JMicron)?


    Hardware RAID is always preferable over software RAID *if* it is available.
    However, I sincerely doubt that the onboard SATA RAID of your motherboard
    will be a hardware RAID.

    Most likely, it'll be a hardware-assisted software RAID, or if you will, a
    "WinRAID", just like a Winmodem isn't a real modem.

    > 1. Using hardware, JMicron (creating the RAID from the BIOS).
    > Will Gentoo see it? Is it stable?


    I do not know the JMicron, but true hardware SATA RAID from a chip on the
    motherboard is extremely rare and would also make the board more expensive
    than its peers.

    Even if you enable the RAID functionality of an SATA onboard
    hardware-assisted software RAID, this will only be seen as a RAID by
    Windows, and then will require you to load the appropriate driver for it,
    if Windows doesn't have a driver for it by itself.

    In GNU/Linux, such a device is seen as a simple SATA controller, regardless
    of whether the RAID functionality is enabled or not, because basically
    that's what it is.

    > If one disk fails, how will I know?


    Both true hardware RAID and software RAID would report this via /syslogd,/
    or possibly via an additional daemon like Adaptec's Storage Manager for
    Adaptec RAID adapters.

    > Will there be any problem replacing the bad disk with a new one?


    The replacement of a failed drive in a RAID array is always subject to
    certain delays and restrictions. Not all RAID adapters (or hard disks!)
    support hotplugging.

    When a drive has been replaced, most hardware RAID adapters will
    automatically rebuild the array - which takes some time and will slow down
    the machine's performance somewhat - but in software RAID, I believe you
    have to rebuild the array manually using /mdadm./
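
    For the record, the manual steps with /mdadm/ are short. A sketch, with
    invented device names (array /dev/md0, failed member /dev/sdb1):

        mdadm /dev/md0 --fail /dev/sdb1      # mark the member faulty, if the kernel hasn't already
        mdadm /dev/md0 --remove /dev/sdb1    # take it out of the array
        # ...physically swap in the new drive, partition it like the old one...
        mdadm /dev/md0 --add /dev/sdb1       # add the replacement; the resync then runs automatically
        cat /proc/mdstat                     # watch the rebuild progress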

    --
    *Aragorn*
    (registered GNU/Linux user #223157)

  3. Re: Raid: software or hardware


    Aragorn wrote:

    > [...]
    > Most likely, it'll be a hardware-assisted software RAID, or if you will, a
    > "WinRAID", just like a Winmodem isn't a real modem.
    > [...]

    Thanks for this info. That's true - the JMicron is not real hardware RAID.
    The Linux kernel has drivers for the JMicron software/hardware RAID.

    So - what is better?
    1. fully software RAID based on the mdadm tool
    2. or the software/hardware JMicron RAID?

    What would you suggest?
    Which is faster? Which puts less load on the CPU?
    Which solution is more reliable?

    Thanks

  4. Re: Raid: software or hardware

    On Wednesday 11 June 2008 12:16, *vertigo* wrote
    in /comp.os.linux.hardware:/

    >>> [...]
    >>> 1. Using hardware, JMicron (creating the RAID from the BIOS).
    >>> Will Gentoo see it? Is it stable?

    >>
    >> I do not know the JMicron, but true hardware SATA RAID from a chip on the
    >> motherboard is extremely rare and would also make the board more
    >> expensive than its peers.
    >>
    >> Even if you enable the RAID functionality of an SATA onboard
    >> hardware-assisted software RAID, this will only be seen as a RAID by
    >> Windows, and then will require you to load the appropriate driver for it,
    >> if Windows doesn't have a driver for it by itself.
    >>
    >> In GNU/Linux, such a device is seen as a simple SATA controller,
    >> regardless of whether the RAID functionality is enabled or not, because
    >> basically that's what it is.
    >>
    >> [...]

    >
    > Thanks for this info. That's true - the JMicron is not real hardware RAID.
    > The Linux kernel has drivers for the JMicron software/hardware RAID.
    >
    > So - what is better?
    > 1. fully software RAID based on the mdadm tool
    > 2. or the software/hardware JMicron RAID?


    In this particular case, there wouldn't be much of a difference
    performance-wise. However, if the Linux kernel has a driver for this chip,
    then it might be wise to use it, just in the event that using the chipset
    with the proper driver provides extra functionality - e.g. with regard to
    error or status reporting - over the rather "universal" default Linux
    software RAID mechanisms.

    > What would you suggest?


    Try the /jmicron/ driver and see how it goes. If stability issues ensue
    from it, you know you have a safe fallback in traditional Linux software
    RAID.
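
    A quick way to see which kernel driver has actually bound to the JMicron
    chip - a sketch, assuming a pciutils recent enough to know the -k option:

        lspci -k | grep -A 2 -i jmicron     # look for the "Kernel driver in use:" line
        dmesg | grep -i -e jmicron -e ahci  # probe messages from the SATA drivers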

    > Which is faster? Which puts less load on the CPU?


    There would normally not be any difference in performance or CPU load
    between a complete software RAID and a hardware-assisted software RAID.
    They both use the mechanism of software RAID, but the "native" driver might
    add some functionality.

    > Which solution is more reliable?


    Well, I have no experience with or reports on the /jmicron/ driver, but what
    I do know is that Linux software RAID is a timeproven and robust solution.

    If however the /jmicron/ driver is part of the stock Linux kernel tree, then
    it means that the driver is released under the GPL, and if so, it'll be
    quite reliable as well, because Linus would never sanction the inclusion of
    an unstable driver in his stable kernel tree.

    Just as an illustration (and for what it's worth), a survey conducted by an
    American university - I forget which one - a few years ago showed that if
    one compares equal amounts of proprietary and FOSS code - i.e. an equal
    number of lines of code - then the proprietary code contains about 300% to
    400% as many bugs as the comparable amount of FOSS code.

    So in other words, if the driver is included in the stock Linux kernel, then
    it should statistically be considered about three to four times more stable
    than a proprietary driver, even if that proprietary driver were released by
    the hardware manufacturer themselves - cf. nVidia and the likes.

    Bottom line: If this /jmicron/ driver is indeed part of the stock kernel - I
    keep repeating this because in all honesty I don't know whether it is; I
    myself tend to favor SCSI solutions for RAID and am therefore less
    interested in the PATA/SATA RAID stuff - then the reliability of this
    driver /should/ be on par with the generic Linux software RAID mechanisms.

    Basically, the only difference between regular software RAID and
    the /jmicron/ driver would be the driver for the chipset. In the former
    case, you would be using a generic SATA driver, and in the latter a driver
    that takes advantage of the extra functionality in this chip.
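
    For completeness: if you did want to use the BIOS-created RAID set itself
    under GNU/Linux, the usual tool for these hardware-assisted formats is
    /dmraid/ rather than /mdadm/ - a sketch, assuming the JMicron metadata
    format is among those your dmraid build supports:

        dmraid -r     # list the RAID sets described by the BIOS metadata
        dmraid -ay    # activate them; the array appears as a device under /dev/mapper/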

    Hope this was helpful... ;-)

    --
    *Aragorn*
    (registered GNU/Linux user #223157)

  5. Re: Raid: software or hardware

    vertigo writes:
    >What's better to use: software or hardware (JMicron)?
    >
    >1. Using hardware, JMicron (creating the RAID from the BIOS).
    >Will Gentoo see it? Is it stable?
    >If one disk fails, how will I know?
    >Will there be any problem replacing the bad disk with a new one?
    >
    >2. Software RAID (using mdadm)
    >
    >What's better? Has anybody used this JMicron RAID?


    "Hardware" RAID has the following disadvantages:

    - When the controller dies, you to install the same hardware to get at
    your data. At that time, you will probably not be able to buy it
    anymore, so better buy it now. If you do RAID1, then you may be
    able to get to the data without additional hardware (but better try
    that first to be sure), but then you won't have a performance
    advantage over md even if the jmicron RAID is really hardware RAID.

    - "Hardware" RAID works on whole disks, md can be applied to
    partitions; so with md you can RAID1 your valuable data, RAID0 some
    scratch stuff, and have non-RAID swap partitions.

    In conclusion, I recommend using md unless you have really good
    reasons to use something else.
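
    To illustrate the partition-level flexibility - a sketch, with invented
    device names:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # RAID1 for the valuable data
        mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2  # RAID0 for scratch stuff
        # sda3 and sdb3 can stay plain, non-RAID swap partitions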

    BTW, I just wasted a day playing around with an ICH9R-based RAID; the
    idea was to use the ICH9R RAID1 on Windows and md on Linux, but it
    does not work that way: contrary to what some people have claimed,
    Intel's Matrix RAID does not work on partitions, but allows you to divide
    the disks into two (not more) parts in a different way; the idea is
    probably that you use RAID1 for one part and RAID0 for the other part.
    Anyway, I finally gave up on that idea, so Windows just has to do
    without RAID; it's not so important anyway :-).

    The aftereffects of my experiments with the RAID cost me another
    half-day (even after changing the mode to AHCI and writing over the
    start of the disk, the Windows driver still saw the RAID I had created
    during the experiments).

    One other thing worth noting: for md, make sure you create RAIDs with
    a version 0.90 superblock (mdadm option -e 0.90), because lilo (if you
    boot from RAID) and kernels as recent as 2.6.19 don't grok the newer
    superblock formats; even though the documentation says that 0.90 is
    the default, the Knoppix (5.3 IIRC) mdadm created 1.0 superblocks for
    me by default :-(.
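
    In concrete terms - again with invented device names:

        mdadm --create /dev/md0 -e 0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --examine /dev/sda1 | grep -i version    # should report 0.90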

    - anton
    --
    M. Anton Ertl Some things have to be seen to be believed
    anton@mips.complang.tuwien.ac.at Most things have to be believed to be seen
    http://www.complang.tuwien.ac.at/anton/home.html

  6. Re: Raid: software or hardware

    On Wed, 11 Jun 2008 11:33:03 +0200 Aragorn wrote:

    | Hardware RAID is always preferable over software RAID *if* it is available.
    | However, I sincerely doubt that the onboard SATA RAID of your motherboard
    | will be a hardware RAID.

    I have a Tyan S2927A2NRF with 6 SATA ports. I had 4 of them connected for
    a while to 4 WD 750GB SATA drives. I achieved a 96 MB/s transfer rate when
    accessing 1 drive at a time (linear reading/writing, so the drive would be
    doing more data transfer than seeking for this test). When I ran all 4
    drives at the same time (4 separate processes doing reading), the speed
    of each dropped to about 92 MB/s, so I was getting an aggregate speed of
    368 MB/s. I don't know what caused the slowdown (controller, bus, driver,
    scheduler) but the machine has 2x dual-core 2.8GHz Opterons, so there is
    at least enough CPU power there to run 4 processes easily. Nevertheless
    it's still a decent throughput.

    My question is, does your preference for hardware RAID still hold here?
    Presumably I could get a 368 MB/s peak with software RAID minus whatever
    overhead that RAID would incur. Hardware RAID in the controller (it has
    it, but I've not determined if Linux can configure and work with it, and
    it might be the "WinRAID" you mention) might do just as well. External
    hardware RAID might be a problem, as the aggregate transfer rate (2.944
    Gbps) is pushing the limit of a single SATA-II connection (nominally 3.0
    Gbps, and less in usable payload after 8b/10b encoding).


    | Most likely, it'll be a hardware-assisted software RAID, or if you will, a
    | "WinRAID", just like a Winmodem isn't a real modem.
    [*] El-cheapo devices that steal resources from the main CPU(s) so they can
    market lower pricing.


    |> 1. Using hardware, JMicron (creating the RAID from the BIOS).
    |> Will Gentoo see it? Is it stable?
    |
    | I do not know the JMicron, but true hardware SATA RAID from a chip on the
    | motherboard is extremely rare and would also make the board more expensive
    | than its peers.

    Seems to be common on server grade motherboards in the many-hundred-dollar
    ranges.


    | Even if you enable the RAID functionality of an SATA onboard
    | hardware-assisted software RAID, this will only be seen as a RAID by
    | Windows, and then will require you to load the appropriate driver for it,
    | if Windows doesn't have a driver for it by itself.
    |
    | In GNU/Linux, such a device is seen as a simple SATA controller, regardless
    | of whether the RAID functionality is enabled or not, because basically
    | that's what it is.

    I'm guessing that if it is a "WinRAID" style device, it has special
    queueing commands that at least improve on driving N separate SATA ports
    as software RAID.


    |> If one disk fails, how will I know?
    |
    | Both true hardware RAID and software RAID would report this via /syslogd,/
    | or possibly via an additional daemon like Adaptec's Storage Manager for
    | Adaptec RAID adapters.
    |
    |> Will there be any problem replacing the bad disk with a new one?
    |
    | The replacement of a failed drive in a RAID array is always subject to
    | certain delays and restrictions. Not all RAID adapters (or hard disks!)
    | support hotplugging.

    It appears the ones on the Tyan S2927A2NRF board do not. I connected a
    couple of ports to an eSATA adapter and tried plugging in an external SATA
    drive (a WD 1TB drive with 3 interfaces). It would not recognize the
    hotplugged drive automatically. If I had the drive plugged in while booting
    up, it would find the drive OK.


    | When a drive has been replaced, most hardware RAID adapters will
    | automatically rebuild the array - which takes some time and will slow down
    | the machine's performance somewhat - but in software RAID, I believe you
    | have to rebuild the array manually using /mdadm./

    Presumably some kind of RAID manager daemon could run in the background to
    emulate the hardware RAID by doing this chore when it sees the drive being
    added.
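
    /mdadm/ itself comes fairly close to that out of the box - a sketch (mail
    address invented):

        mdadm --monitor --scan --daemonise --delay=300 --mail=admin@example.com
        # mails on Fail/FailSpare/DegradedArray events; automatically re-adding a
        # hot-plugged replacement would still need a udev rule or script on top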

    For a project I am studying which will need a lot of reliable file space,
    I'm looking at using RAID only to expand the storage per server, and doing
    redundancy between servers, with load balancing between servers as well.
    Programs would run on the servers to compare each server tree to the others
    and replicate anything found missing on one or more servers.
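
    The compare-and-replicate pass could be as simple as a periodic /rsync/
    between the trees - a sketch, with invented paths:

        rsync -a --ignore-existing serverB:/data/ /data/   # pull only the files we don't already have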

    --
    |WARNING: Due to extreme spam, googlegroups.com is blocked. Due to ignorance |
    | by the abuse department, bellsouth.net is blocked. If you post to |
    | Usenet from these places, find another Usenet provider ASAP. |
    | Phil Howard KA9WGN (email for humans: first name in lower case at ipal.net) |

  7. Re: Raid: software or hardware

    On Saturday 14 June 2008 22:19, someone who identifies as
    *phil-news-nospam@ipal.net* wrote in /comp.os.linux.hardware:/

    > On Wed, 11 Jun 2008 11:33:03 +0200 Aragorn
    > wrote:
    >
    > | Hardware RAID is always preferable over software RAID *if* it is
    > | available. However, I sincerely doubt that the onboard SATA RAID of your
    > | motherboard will be a hardware RAID.
    >
    > I have a Tyan S2927A2NRF with 6 SATA ports. I had 4 of them connected for
    > a while to 4 WD 750GB SATA drives. I achieved a 96 MB/s transfer rate when
    > accessing 1 drive at a time (linear reading/writing, so the drive would be
    > doing more data transfer than seeking for this test). When I ran all 4
    > drives at the same time (4 separate processes doing reading), the speed
    > of each dropped to about 92 MB/s, so I was getting an aggregate speed of
    > 368 MB/s. I don't know what caused the slowdown (controller, bus, driver,
    > scheduler) but the machine has 2x dual-core 2.8GHz Opterons, so there is
    > at least enough CPU power there to run 4 processes easily. Nevertheless
    > it's still a decent throughput.


    So it is, and as long as your disks don't have a higher throughput than the
    bus can handle, you're okay. ;-)

    > My question is, does your preference for hardware RAID still hold here?


    Well, my preference for hardware RAID is not based upon performance alone,
    but also on other technicalities.

    > Presumably I could get a 368 MB/s peak with software RAID minus whatever
    > overhead that RAID would incur. Hardware RAID in the controller (it has
    > it, but I've not determined if Linux can configure and work with it, and
    > it might be the "WinRAID" you mention) might do just as well. External
    > hardware RAID might be a problem, as the aggregate transfer rate (2.944
    > Gbps) is pushing the limit of a single SATA-II connection (nominally 3.0
    > Gbps, and less in usable payload after 8b/10b encoding).


    I have no experience with real SATA hardware RAID, but I do have an Adaptec
    SAS hardware RAID controller in my Tyan machine, and it has a 3 Gb/sec link
    - and this is important! - *per* attached storage device (roughly 300
    MB/sec of usable payload after the 8b/10b line encoding).

    In other words, that is the transfer rate of the connection between the
    controller and each individual disk. The only possible bottleneck - should
    the disks really be able to put out that much data per second, which they
    don't - is the connection between the motherboard and the SAS controller,
    but in this case that's an 8-lane PCIe connection, so I'm safe there. ;-)

    > |> 1. Using hardware, JMicron (creating the RAID from the BIOS).
    > |> Will Gentoo see it? Is it stable?
    > |
    > | I do not know the JMicron, but true hardware SATA RAID from a chip on
    > | the motherboard is extremely rare and would also make the board more
    > | expensive than its peers.
    >
    > Seems to be common on server grade motherboards in the many-hundred-dollar
    > ranges.


    Well, I believe my Tyan n6650W motherboard was around 1400 Euro - I'm not
    sure, I'd have to check - and it has a single channel onboard IDE
    controller, an onboard SATA controller with /nvraid/ - i.e. a
    hardware-assisted software RAID from the nVidia nForce Professional chipset
    - plus an onboard non-RAID LSI SAS controller and an onboard Firewire
    controller.

    > | Even if you enable the RAID functionality of an SATA onboard
    > | hardware-assisted software RAID, this will only be seen as a RAID by
    > | Windows, and then will require you to load the appropriate driver for
    > | it, if Windows doesn't have a driver for it by itself.
    > |
    > | In GNU/Linux, such a device is seen as a simple SATA controller,
    > | regardless of whether the RAID functionality is enabled or not, because
    > | basically that's what it is.
    >
    > I'm guessing that if it is a "WinRAID" style device, it has special
    > queueing commands that at least improve on driving N separate SATA ports
    > as software RAID.


    I'm not really well-versed on such devices, but you might be correct about
    that.

    > | When a drive has been replaced, most hardware RAID adapters will
    > | automatically rebuild the array - which takes some time and will slow
    > | down the machine's performance somewhat - but in software RAID, I
    > | believe you have to rebuild the array manually using /mdadm./
    >
    > Presumably some kind of RAID manager daemon could run in the background to
    > emulate the hardware RAID by doing this chore when it sees the drive being
    > added.


    Something like that would have to be handled via /udevd,/ I presume.

    > For a project I am studying which will need a lot of reliable file space,
    > I'm looking at using RAID only to expand the storage per server, and doing
    > redundancy between servers, with load balancing between servers as well.
    > Programs would run on the servers to compare each server tree to the
    > others and replicate anything found missing on one or more servers.


    The easiest and probably most reliable solution would be to use hardware
    RAID controllers, then, in my humble opinion. But I presume you're going
    to hook up SATA drives to them? I can't really say that I have much
    experience with those as I've been using SCSI drives for many years
    now. :-/

    --
    *Aragorn*
    (registered GNU/Linux user #223157)

  8. Re: Raid: software or hardware

    vertigo wrote:
    > Hello
    >
    > I have the mainboard: Gigabyte P965S3.
    > I have 2 SATA interfaces on the built-in JMicron controller
    > (and another 4 SATA interfaces).
    > I upgraded the BIOS to version F12.
    > I want to make a mirror (RAID 1) array.
    >
    > What's better to use: software or hardware (JMicron)?


    Software RAID. There are several failure modes that you can experience
    -- drive failure (probably the most common) and controller failure (and
    some others, but we won't get into those).

    If a drive fails, software and hardware RAID are equivalent. Simply
    replace the drive. The md software in Linux will automatically rebuild
    the drive as needed. The "hardware" solution? Who knows, but we presume
    it will do the same.

    The real difference is what happens if you suffer a controller failure.
    Typically, if using hardware RAID (say RAID 5), you will have to replace
    the mainboard/controller with an EXACT match. Which puts a lot of trust
    in your vendor. With Software RAID, you put the drives into a compatible
    Linux box, and things "just work".

    I would stay away from Gentoo for this application and stick to a
    (larger) distribution (say SuSE). If there is a controller problem, it
    is very likely that the disks can then be moved to a new
    mainboard/controller and even BOOT there (no installation required,
    faster system recovery). Also, the application is purely I/O bound, and
    DMA is used to move the data anyway. Computing RAID 5 parity may seem to
    be an issue, but any modern processor can do this very quickly (it is
    not a bottleneck anymore, even if done in software).

    I use mirroring for the boot partition, and RAID 5 for the data
    partitions. Linux cannot boot from a RAID 5 volume, but can boot from a
    mirrored volume. I also tend to put 4 drives into a RAID 5 volume. This
    gives 3/4 of the raw storage for data (four 500 GB drives give 1.5 TB
    of usable space, with the ability to lose any single drive and
    continue).
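
    As a sketch of that layout (device names invented):

        mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # 4-way mirrored boot partition
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2   # RAID 5 data volume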



  9. Re: Raid: software or hardware

    Fred Weigel wrote:
    > vertigo wrote:
    >> [...]
    >> What's better to use: software or hardware (JMicron)?

    >
    > Software RAID. There are several failure modes that you can experience
    > -- drive failure (probably the most common), or controller failure (and
    > some others, but we won't get into it).
    >

    Personally, I've generally preferred hardware RAID. I've had software
    RAID systems not notice a drive failing, causing corruption of the
    filesystem. The better hardware RAID cards seem to be better at sniffing
    out a failing disk and removing it from an array before any problems can
    ensue.
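
    On the software side, /smartd/ is the usual way to catch a disk that is
    going bad before the RAID layer trips over it - a sketch:

        smartctl -H /dev/sda   # one-shot SMART health check
        # or run smartd, with a -m directive in smartd.conf, to be mailed
        # when SMART attributes start degrading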

    > If a drive fails, software and hardware RAID are equivalent. Simply
    > replace the drive. The md software in Linux will automatically rebuild
    > the drive as needed. The "hardware" solution? Who knows, but we presume
    > it will do the same.
    >

    All hardware RAID systems I've used do this and more. They'll even
    e-mail you with any alerts on the system.

    > The real difference is what happens if you suffer a controller failure.
    > Typically, if using hardware RAID (say RAID 5), you will have to replace
    > the mainboard/controller with an EXACT match. Which puts a lot of trust
    > in your vendor. With Software RAID, you put the drives into a compatible
    > Linux box, and things "just work".
    >

    I wouldn't have thought this was such a major problem if you use a major
    RAID system vendor. 3Ware or Areca are likely to be around for a while.
    For really important data systems, you should be thinking of having
    spares ready to be installed, or going for a proper external RAID system
    with hot-swap everything and support contracts to match.

    If you intend to use a large number of disks, you'll need one of these
    cards to get the requisite number of ports anyway, even if you choose to
    use them as JBOD suppliers to an md RAID.

    Matthew

  10. Re: Raid: software or hardware

    Matthew Wild writes:
    >Anton Ertl wrote:
    >> Matthew Wild writes:
    >>> Personally, I've generally preferred hardware RAID. I've had software
    >>> RAID systems not notice a drive failing,

    >>
    >> Which software RAID was that?
    >>

    >Standard Linux md RAID.
    >
    >>> causing corruption of the
    >>> filesystem.

    >>
    >> How does not noticing a failing drive cause the corruption of the file
    >> system?
    >>

    >Because as one drive is failing it is corrupting any accesses to that disk.

    [...]
    >I was just pointing out that while a disk is failing, the software RAID
    >has not noticed and limped along still trying to use the failing disk
    >which happily provides garbage when accessed.


    It's an unusual failure mode if a drive delivers wrong data without
    reporting an error. Why do you believe that the hardware RAID would
    have fared better when confronted with such a drive?

    >In my experience, the
    >hardware RAID systems I have used, 3Ware, Digital/Compaq/HP RA8000,
    >MA8000, MSA1500 (quite horrible to administer), manage their arrays
    >fairly conservatively and drop disks pretty quickly.


    In my experience md is overly conservative and drops drives pretty
    quickly: We had a box that now and then reported IDE errors that the
    kernel could recover from by doing an IDE reset; however, by then md
    had assumed that the drive had failed and had dropped it from the
    array, so we needed to re-add it to the array manually, which was
    pretty annoying. OTOH, maybe this annoyance was good, because we
    eventually fixed the problem.
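
    For reference, the manual step was just (device name invented):

        mdadm /dev/md0 --re-add /dev/hda1   # put the dropped-but-healthy member back; md resyncs it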

    - anton
    --
    M. Anton Ertl Some things have to be seen to be believed
    anton@mips.complang.tuwien.ac.at Most things have to be believed to be seen
    http://www.complang.tuwien.ac.at/anton/home.html
