SCSI vs SATA High-Perf - Storage



Thread: SCSI vs SATA High-Perf

  1. SCSI vs SATA High-Perf

    Hello all,

    Which of the two following architectures would you choose for a
    high-perf NFS server in a cluster environment? Most of our data ( 80% )
    is small ( < 64 KB ) files. Reads and writes are similar and mostly
    random in nature:

    Architecture 1:
    Tyan 2882
    2xOpteron 246
    4 GB RAM
    2x80 GB SATA ( System )
    2x12-Way 3Ware Cards
    24x73 GB 10k rpm Western Digital Raptors
    Software RAID 10 on Linux 2.6.x
    XFS

    Architecture 2:
    Tyan 2881 with Dual U-320 SCSI
    2xOpteron 246
    4 GB RAM
    2x80 GB SATA ( System )
    12x146 GB Fujitsu 10k SCSI
    Software RAID 10 on Linux
    XFS

    The price for both systems is almost the same. Considerations:

    - Number of Spindles: Solution 1 looks like it might have an edge here
    for small sequential reads and writes since there are just twice as
    many spindles.

    - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    we use large sequential reads. Solution 2 would be limited by the dual
    SCSI bus bandwidth of 640 MB/s ( 2x320 MB/s ). I doubt we would ever
    reach that level of bandwidth in any random-read or random-write
    situation, and in our small random file scenario I think both systems
    would perform equally. Any comments ?

    - MTBF: Solution 2 has a definite edge. Some numbers:

    MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours

    Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours

    MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

    Not surprisingly Solution 2 is twice as reliable. This doesn't take
    into account the novelty of the SATA Raptor drive and the proven track
    record of the SCSI solution. In any case, comments on this MTBF point
    are welcome.
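    For reference, the arithmetic above can be reproduced in a few lines of
    Python. Note that simply adding failure rates like this treats every
    drive and controller as a single point of failure (effectively a RAID0
    assumption), so take it as a sketch of the calculation, not of RAID 10
    reliability:

```python
# Series-system MTBF: failure rates (1/MTBF) add, so the system MTBF
# is the reciprocal of the summed rates. This models any-single-failure
# (RAID0-style) loss, not a mirrored array.
def series_mtbf(components):
    """components: list of (count, mtbf_hours) tuples."""
    total_rate = sum(count / mtbf for count, mtbf in components)
    return 1.0 / total_rate

# Solution 1: 24 Raptors (1.2M h each) + 2 3Ware cards (1M h each)
mtbf1 = series_mtbf([(24, 1_200_000), (2, 1_000_000)])  # ~45455 hours

# Solution 2: 12 SCSI drives (1.2M h each)
mtbf2 = series_mtbf([(12, 1_200_000)])  # ~100000 hours

print(mtbf1, mtbf2)
```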

    - RAID Performance: I am not sure about this. In principle both
    solutions should behave the same since we are using SW RAID, but I
    don't know how the fact that SCSI is a shared bus with overhead would
    affect RAID performance. What do you think ? Any ideas as to how to
    spread the RAID 10 in a dual U-320 SCSI scenario ?
    SATA being point-to-point appears to have an edge again, but your
    thoughts are welcome.
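    On spreading the RAID 10 across a dual U-320 setup: one hedged sketch
    with Linux mdadm is to interleave the two channels in the device list
    so that each mirrored pair spans both buses (the /dev/sd* names and
    the channel assignment here are hypothetical):

```shell
# Hypothetical layout: sda..sdf on SCSI channel A, sdg..sdl on channel B.
# mdadm's raid10 "near 2" layout mirrors adjacent devices in the list,
# so interleaving the channels puts each mirror half on a different bus:
# a whole channel can then fail without taking the array down.
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=12 \
    /dev/sda /dev/sdg /dev/sdb /dev/sdh /dev/sdc /dev/sdi \
    /dev/sdd /dev/sdj /dev/sde /dev/sdk /dev/sdf /dev/sdl
mkfs.xfs /dev/md0
```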

    - Would I get a considerable edge if I used 15k SCSI drives ? I am not
    totally convinced that SATA is our best choice. Any help is greatly
    appreciated.

    Many thanks,

    Parsifal


  2. Re: SCSI vs SATA High-Perf

    In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > Hello all,


    > Which of the two following architectures would you choose for a
    > high-perf NFS server in a cluster env. Most of our data ( 80% ) is
    > small ( < 64 kb ) files. Reads and Writes are similar and mostly random
    > in nature:


    > Architecture 1:
    > Tyan 2882
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80Gb SATA ( System )
    > 2x12-Way 3Ware Cards
    > 24 73 GB 10k rpm Western Digital Raptors
    > Software RAID 10 on Linux 2.6.x
    > XFS


    > Architecture 2:
    > Tyan 2881 with Dual U-320 SCSI
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80Gb SATA ( System )
    > 12x146Gb Fujitsu 10k SCSI
    > Software RAID 10 on Linux
    > XFS


    > The price for both system is almost the same. Considerations:


    > - Number of Spindles: Solution 1 looks like it might have an edge here
    > for small sequential reads and writes since there are just twice as
    > many spindles.


    > - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    > we use large sequential reads. Solution 2 would be limited by the Dual
    > SCSI bus bandwidth of 640 MB/s. I doubt we would ever reach that level of
    > bandwidth in any random-read or random-write situation and in our small
    > random file scenario I think both system would perform equally. Any
    > comments ?


    > - MTBF: Solution 2 has a definite edge. Some numbers:


    > MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours


    > Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours


    > MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours


    > Not surprisingly Solution 2 is twice as reliable. This doesn't take
    > into account the novelty of the SATA Raptor drive and the proven track
    > record of the SCSI solution. In any case comments on this MTBF point
    > are welcomed.


    > - RAID Performance: I am not sure about this. In principle both
    > solution should behave the same since we are using SW RAID but I don't
    > know how the fact that SCSI is a bus with overhead would affect RAID
    > performance ? What do you think ? Any ideas as to how to spread the
    > RAID 10 in a dual U 320 SCSI Scenario ?
    > SATA being Point-To-Point appears to have an edge again but your
    > thoughts are welcomed.


    > - Would I get a considerable edge if I used 15k SCSI Drives ? I am not
    > totally convinced that the SATA is our best choice. Any help is greatly
    > appreciated.


    One thing you can be relatively sure of is that the SCSI controller
    will work well with the mainboard. Also Linux has a long history of
    supporting SCSI, while SATA support is new and still being worked on.

    For your access scenario, SCSI will also be superior, since SCSI
    has supported command queuing for a long time.

    I also would not trust the Raptors as much as I would trust SCSI
    drives. The SCSI manufacturers know that SCSI customers expect high
    reliability, while the Raptor is more a poor man's race car.

    One more argument: You can put Config 2 on a 550W (redundant)
    PSU, while Config 1 will need something significantly larger,
    also because SATA does not support staggered start-up, while
    SCSI does. Is that already factored into the cost?

    Arno

  3. Re: SCSI vs SATA High-Perf

    > Hello all,
    >
    > Which of the two following architectures would you choose for a
    > high-perf NFS server in a cluster env. Most of our data ( 80% ) is
    > small ( < 64 kb ) files. Reads and Writes are similar and mostly random
    > in nature:
    >
    > Architecture 1:
    > Tyan 2882
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80Gb SATA ( System )
    > 2x12-Way 3Ware Cards
    > 24 73 GB 10k rpm Western Digital Raptors
    > Software RAID 10 on Linux 2.6.x
    > XFS
    >
    > Architecture 2:
    > Tyan 2881 with Dual U-320 SCSI
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80Gb SATA ( System )
    > 12x146Gb Fujitsu 10k SCSI
    > Software RAID 10 on Linux
    > XFS
    >
    > The price for both system is almost the same. Considerations:
    >
    > - Number of Spindles: Solution 1 looks like it might have an edge here
    > for small sequential reads and writes since there are just twice as
    > many spindles.


    Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.

    > - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    > we use large sequential reads. Solution 2 would be limited by the Dual
    > SCSI bus bandwidth of 640 MB/s. I doubt we would ever reach that level of
    > bandwidth in any random-read or random-write situation and in our small
    > random file scenario I think both system would perform equally. Any
    > comments ?


    You are designing for NFS, right? Don't forget that network IO and
    SCSI IO are on the same PCI-X 64bit 100MHz bus. Therefore the
    throughput available for storage will be roughly 800MB/s * 0.5 = 400MB/s.

    In random operations, if you get 200 IO/s from each SCSI disk,
    you will have 12disks * 200 IO/s * 64KB = 154MB/s
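    Those two numbers can be checked quickly (the 200 IO/s per spindle is
    an assumption from the post above; the small gap between 150 and 154
    MB/s is just binary vs. decimal megabytes):

```python
# Back-of-the-envelope: random-IO throughput vs. the shared bus budget.
KB = 1024
MB = 1024 * KB

disks = 12
ios_per_disk = 200        # assumed random IO/s per 10k SCSI spindle
io_size = 64 * KB         # typical file size in this workload

random_throughput = disks * ios_per_disk * io_size
print(random_throughput / MB)  # 150.0 (MiB/s; close to the 154 MB/s quoted)

pcix_bus = 800 * MB            # PCI-X 64-bit @ 100 MHz
storage_share = pcix_bus / 2   # the bus is shared with the network IO
print(storage_share / MB)      # 400.0 (MiB/s left for storage)
```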

    > - MTBF: Solution 2 has a definite edge. Some numbers:
    >
    > MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
    >
    > Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
    >
    > MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours


    How did you calculate your total MTBF???
    Your calcs may be good for RAID0 but not for RAID10.

    Assuming a 5 year period, for a 1,200,000 hour MTBF disk
    reliability is about 0.964.

    For RAID10 (stripe of mirrored drives) in 6x2 configuration
    equivalent MTBF will be 5,680,000 hours

    Assuming a 5 year period, for a 1,000,000 hour MTBF disk
    reliability is about 0.957.

    For RAID10 (stripe of mirrored drives) in 12x2 configuration
    equivalent MTBF will be 2,000,000 hours

    For a single RAID1 of the 1,000,000 hr MTBF drives
    equivalent MTBF will be 23,800,000 hours
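    One way to land close to figures like these is the standard
    mission-time model: each disk survives the 5-year mission with
    probability R = exp(-t/MTBF), a mirrored pair fails only if both disks
    fail, the stripe fails if any pair fails, and an equivalent MTBF is
    read back from the system reliability. This is a reconstruction, not
    necessarily the exact method used above:

```python
import math

HOURS_5Y = 5 * 365 * 24  # 43800-hour mission time

def raid10_equiv_mtbf(pairs, disk_mtbf, t=HOURS_5Y):
    """Equivalent MTBF of a stripe of `pairs` mirrored pairs over mission t."""
    r_disk = math.exp(-t / disk_mtbf)   # single-disk mission reliability
    r_pair = 1 - (1 - r_disk) ** 2      # a pair survives unless both disks die
    r_sys = r_pair ** pairs             # the stripe needs every pair alive
    return -t / math.log(r_sys)         # back out an exponential MTBF

print(raid10_equiv_mtbf(6, 1_200_000))   # ~5.68 million hours
print(raid10_equiv_mtbf(12, 1_000_000))  # ~1.99 million hours
print(raid10_equiv_mtbf(1, 1_000_000))   # ~23.8 million hours
```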

    BTW, 3Ware controllers are PCI 2.2 64bit 66MHz.
    I can't believe that their MTBF is so low (1,000,000 hr).
    If you lose one, probably your RAID will go down too.

    > Not surprisingly Solution 2 is twice as reliable. This doesn't take
    > into account the novelty of the SATA Raptor drive and the proven track
    > record of the SCSI solution. In any case comments on this MTBF point
    > are welcomed.
    >
    > - RAID Performance: I am not sure about this. In principle both
    > solution should behave the same since we are using SW RAID but I don't
    > know how the fact that SCSI is a bus with overhead would affect RAID
    > performance ? What do you think ? Any ideas as to how to spread the
    > RAID 10 in a dual U 320 SCSI Scenario ?
    > SATA being Point-To-Point appears to have an edge again but your
    > thoughts are welcomed.
    >
    > - Would I get a considerable edge if I used 15k SCSI Drives ?


    In theory up to 40%.

    > I am not
    > totally convinced that the SATA is our best choice.


    Agree.

    > Any help is greatly
    > appreciated.
    >
    > Many thanks,
    >
    > Parsifal
    >




  4. Re: SCSI vs SATA High-Perf

    Arno Wagner wrote:

    > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >> Hello all,

    >
    >> Which of the two following architectures would you choose for a
    >> high-perf NFS server in a cluster env. Most of our data ( 80% ) is
    >> small ( < 64 kb ) files. Reads and Writes are similar and mostly random
    >> in nature:

    >
    >> Architecture 1:
    >> Tyan 2882
    >> 2xOpteron 246
    >> 4 GB RAM
    >> 2x80Gb SATA ( System )
    >> 2x12-Way 3Ware Cards
    >> 24 73 GB 10k rpm Western Digital Raptors
    >> Software RAID 10 on Linux 2.6.x
    >> XFS

    >
    >> Architecture 2:
    >> Tyan 2881 with Dual U-320 SCSI
    >> 2xOpteron 246
    >> 4 GB RAM
    >> 2x80Gb SATA ( System )
    >> 12x146Gb Fujitsu 10k SCSI
    >> Software RAID 10 on Linux
    >> XFS

    >
    >> The price for both system is almost the same. Considerations:

    >
    >> - Number of Spindles: Solution 1 looks like it might have an edge here
    >> for small sequential reads and writes since there are just twice as
    >> many spindles.

    >
    >> - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    >> we use large sequential reads. Solution 2 would be limited by the Dual
    >> SCSI bus bandwidth of 640 MB/s. I doubt we would ever reach that level of
    >> bandwidth in any random-read or random-write situation and in our small
    >> random file scenario I think both system would perform equally. Any
    >> comments ?

    >
    >> - MTBF: Solution 2 has a definite edge. Some numbers:

    >
    >> MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours

    >
    >> Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours

    >
    >> MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

    >
    >> Not surprisingly Solution 2 is twice as reliable. This doesn't take
    >> into account the novelty of the SATA Raptor drive and the proven track
    >> record of the SCSI solution. In any case comments on this MTBF point
    >> are welcomed.

    >
    >> - RAID Performance: I am not sure about this. In principle both
    >> solution should behave the same since we are using SW RAID but I don't
    >> know how the fact that SCSI is a bus with overhead would affect RAID
    >> performance ? What do you think ? Any ideas as to how to spread the
    >> RAID 10 in a dual U 320 SCSI Scenario ?
    >> SATA being Point-To-Point appears to have an edge again but your
    >> thoughts are welcomed.

    >
    >> - Would I get a considerable edge if I used 15k SCSI Drives ? I am not
    >> totally convinced that the SATA is our best choice. Any help is greatly
    >> appreciated.

    >
    > One thing you can be relatively sure of is that the SCSI controller
    > will work well with the mainboard. Also Linux has a long history of
    > supporting SCSI, while SATA support is new and still being worked on.


    If he's using 3ware host adapters then "SATA support" is not an
    issue--that's handled by the processor on the host adapter and all that the
    Linux driver does is give commands to that processor.

    Do you have any evidence to present that suggests that 3ware RAID
    controllers have problems with any known mainboard?

    > For you access scenario, SCSI will also be superior, since SCSI
    > has supported command queuing for a long time.


    I'm sorry, but it doesn't follow that because SCSI has supported command
    queuing for a long time that the performance will be superior.

    > I also would not trust the Raptors as I would trust SCSI drives.
    > The SCSI manufacturers know that SCSI customers expect high
    > reliability, while the Raptor is more a poor man's race car.


    Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
    instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
    they're Western Digital's enterprise drive. WD has chosen to take a risk
    and make their enterprise line with SATA instead of SCSI. Are you
    suggesting that WD is incapable of producing a reliable drive?

    If it was a Seagate Cheetah with an SATA chip would you say that it was
    going to be unreliable?

    > One more argument: You can put Config 2 on a 550W (redundant)
    > PSU, while Config 1 will need something significantly larger,
    > also because SATA does not support staggered start-up, while
    > SCSI does. Is that already factored into the cost?


    Uh, SATA requires one host interface for each drive. Whatever processor is
    controlling those host interfaces can most assuredly stagger the startup if
    that is an issue.

    Not saying that SCSI is not the superior solution but the reasons given seem
    to be ignoring the fact that a "smart" SATA RAID controller is being
    compared with a "dumb" SCSI setup.

    > Arno


    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)

  5. Re: SCSI vs SATA High-Perf

    Arno Wagner wrote:
    > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:


    >
    > One thing you can be relatively sure of is that the SCSI controller
    > will work well with the mainboard. Also Linux has a long history of
    > supporting SCSI, while SATA support is new and still being worked on.
    >
    > For you access scenario, SCSI will also be superior, since SCSI
    > has supported command queuing for a long time.
    >
    > I also would not trust the Raptors as I would trust SCSI drives.
    > The SCSI manufacturers know that SCSI customers expect high
    > reliability, while the Raptor is more a poor man's race car.



    My main concern is their novelty, rather than their performance. Call
    it a hunch, but it just doesn't feel right to risk it while there's a
    proven solid SCSI solution for the same price.

    >
    > One more argument: You can put Config 2 on a 550W (redundant)
    > PSU, while Config 1 will need something significantly larger,


    Thanks for your comments. I forgot about the power. Definitely worth
    considering since we're getting 3 of these servers, and UPS sizing
    should also play into the cost equation.


    > also because SATA does not support staggered start-up, while
    > SCSI does. Is that already factored into the cost?


    This I don't follow, what's staggered start-up ?

    Parsifal



    >
    > Arno



  6. Re: SCSI vs SATA High-Perf

    On 26 Mar 2005 01:01:12 -0800, lmanna@gmail.com wrote:


    >> also because SATA does not support staggered start-up, while
    >> SCSI does. Is that already factored into the cost?

    >
    > This I don't follow, what's staggered start-up ?
    >


    It is a feature that staggers the spinup of the disks sequentially,
    leaving enough time between disk starts to prevent overloading the
    power supply. I think he meant that because he believed SATA does not
    do this, you would need a beefier power supply than you would with the
    SCSI setup to avoid problems on powerup.

    AFAIK delay start or staggered spinup (whatever you want to call it)
    is available on SATA, but it is controller-specific (& most don't
    support it) and it is not a standard feature like the delay-start &
    remote-start jumpers on SCSI drives & backplanes.

  7. Re: SCSI vs SATA High-Perf


    Peter wrote:
    [ Stuff Deleted ]
    > > - Number of Spindles: Solution 1 looks like it might have an edge here
    > > for small sequential reads and writes since there are just twice as
    > > many spindles.

    >
    > Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.


    Yep ! I like those Fujitsus and they are cheaper than the Cheetahs.

    >
    > > - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    > > we use large sequential reads. Solution 2 would be limited by the dual
    > > SCSI bus bandwidth of 640 MB/s. I doubt we would ever reach that level of
    > > bandwidth in any random-read or random-write situation and in our small
    > > random file scenario I think both systems would perform equally. Any
    > > comments ?

    >
    > You are designing for NFS, right? Don't forget that network IO and
    > SCSI IO are on the same PCI-X 64bit 100MHz bus. Therefore available
    > throughput will be 800MB/s * 0.5 = 400MB/s


    Uhmm .. you're right. I guess I'll place a dual e1000 on the other
    PCI-X channel. See:

    ftp://ftp.tyan.com/datasheets/d_s2881_100.pdf


    >
    > In random operations, if you get 200 IO/s from each SCSI disk,
    > you will have 12disks * 200 IO/s * 64KB = 154MB/s
    >
    > > - MTBF: Solution 2 has a definite edge. Some numbers:
    > >
    > > MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
    > >
    > > Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
    > >
    > > MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

    >
    > How did you calculate your total MTBF???
    > Your calcs may be good for RAID0 but not for RAID10.


    Thanks for the correction. You're right again.

    >
    > Assuming a 5 year period, for a 1,200,000 hour MTBF disk
    > reliability is about 0.964.
    >
    > For RAID10 (stripe of mirrored drives) in 6x2 configuration
    > equivalent MTBF will be 5,680,000 hours
    >
    > Assuming a 5 year period, for a 1,000,000 hour MTBF disk
    > reliability is about 0.957.
    >
    > For RAID10 (stripe of mirrored drives) in 12x2 configuration
    > equivalent MTBF will be 2,000,000 hours
    >
    > For a single RAID1 of the 1,000,000 hr MTBF drives
    > equivalent MTBF will be 23,800,000 hours


    Excuse my ignorance, but how did you get these numbers ? In any case
    your numbers show that the MTBF of solution 1 is about half that of
    solution 2.

    >
    > BTW, 3Ware controllers are PCI 2.2 64bit 66MHz.
    > I can't believe that their MTBF is so low (1,000,000 hr).
    > If you lose one, probably your RAID will go down too.


    I thought it was a bit too low too, but there was no info on the 3ware
    site.

    >
    > > Not surprisingly Solution 2 is twice as reliable. This doesn't take
    > > into account the novelty of the SATA Raptor drive and the proven track
    > > record of the SCSI solution. In any case comments on this MTBF point
    > > are welcomed.
    > >
    > > - RAID Performance: I am not sure about this. In principle both
    > > solution should behave the same since we are using SW RAID but I don't
    > > know how the fact that SCSI is a bus with overhead would affect RAID
    > > performance ? What do you think ? Any ideas as to how to spread the
    > > RAID 10 in a dual U 320 SCSI Scenario ?
    > > SATA being Point-To-Point appears to have an edge again but your
    > > thoughts are welcomed.
    > >
    > > - Would I get a considerable edge if I used 15k SCSI Drives ?

    >
    > In theory up to 40%.


    In reality, though, I would say 25-35%.

    >
    > > I am not
    > > totally convinced that the SATA is our best choice.

    >
    > Agree.


    Thanks !

    >
    > > Any help is greatly
    > > appreciated.
    > >
    > > Many thanks,
    > >
    > > Parsifal
    > >



  8. Re: SCSI vs SATA High-Perf

    lmanna@gmail.com wrote:
    > Hello all,
    >
    > Which of the two following architectures would you choose for a
    > high-perf NFS server in a cluster env. Most of our data ( 80% ) is
    > small ( < 64 kb ) files. Reads and Writes are similar and mostly
    > random in nature:


    I wouldn't use either one of them since your major flaw would be using an
    Opteron when you should only be using Xeon or Itanium2 processors. Now, if
    you are just putting an MP3 server in the basement of your home for
    light-duty work you can squeak by with the Opterons. As for the drives, I
    would only use SCSI in the system you mention.



    Rita




  9. Re: SCSI vs SATA High-Perf

    In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > Arno Wagner wrote:
    >> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:


    >>
    >> One thing you can be relatively sure of is that the SCSI controller
    >> will work well with the mainboard. Also Linux has a long history of
    >> supporting SCSI, while SATA support is new and still being worked on.
    >>
    >> For you access scenario, SCSI will also be superior, since SCSI
    >> has supported command queuing for a long time.
    >>
    >> I also would not trust the Raptors as I would trust SCSI drives.
    >> The SCSI manufacturers know that SCSI customers expect high
    >> reliability, while the Raptor is more a poor man's race car.



    > My main concern is their novelty, rather then their performance. Call
    > it a hunch but it just doesn't feel right to risk it while there's a
    > proven solid SCSI solution for the same price.


    >>
    >> One more argument: You can put Config 2 on a 550W (redundant)
    >> PSU, while Config 1 will need something significantly larger,


    > Thanks for your comments. I forgot about the Power. Definitely worth
    > considering since we're getting 3 of these servers and UPS sizing
    > should also play in the cost equation.


    Power is critical to reliability. If you have a PSU at, say,
    50% normal and 70% peak load, that is massively more reliable than
    one at 70%/100%. Also, many PSUs die on start-up, since e.g.
    disks draw their peak currents on spindle start.

    >> also because SATA does not support staggered start-up, while
    >> SCSI does. Is that already factored into the cost?


    > This I don't follow, what's staggered start-up ?


    You can jumper most (all?) SCSI drives to delay their spindle-start.
    Spindle start results in a massive amount of power drawn for some
    seconds. Maybe as much as 2-3 times the peaks you see during operation.

    SCSI drives can be jumpered to spin up on power-on or on receiving
    a start-unit command. Some also support delays. You should be
    able to set the SCSI controller to issue the start-unit command
    to the drives with, say, 5 seconds delay between each unit or so.
    This massively reduces the power drawn on start-up.

    SATA drives all (?) spin up on power-on. It is a problem
    when you have many disks. The PSU needs the reserves to deal
    with this worst case.
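    To put rough numbers on the PSU argument (the per-drive wattages below
    are illustrative assumptions, not measured Raptor or Fujitsu figures):

```python
# Rough PSU sizing model for spin-up. The per-drive wattages are
# illustrative assumptions, not vendor-measured figures.
SPINUP_W = 25    # assumed peak draw per drive while the spindle starts
RUNNING_W = 10   # assumed steady-state draw per drive

def startup_peak(n_drives, staggered):
    """Worst-case drive power draw at power-on, in watts."""
    if staggered:
        # Drives spin up one at a time; the others are already running.
        return SPINUP_W + (n_drives - 1) * RUNNING_W
    # All drives spin up simultaneously.
    return n_drives * SPINUP_W

print(startup_peak(24, staggered=False))  # 600: 24 SATA drives at once
print(startup_peak(12, staggered=True))   # 135: 12 SCSI drives, delayed start
```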

    Arno

  10. Re: SCSI vs SATA High-Perf

    In comp.sys.ibm.pc.hardware.storage "Rita Berkowitz" wrote:
    > lmanna@gmail.com wrote:
    >> Hello all,
    >>
    >> Which of the two following architectures would you choose for a
    >> high-perf NFS server in a cluster env. Most of our data ( 80% ) is
    >> small ( < 64 kb ) files. Reads and Writes are similar and mostly
    >> random in nature:


    > I wouldn't use either one of them since your major flaw would be using an
    > Opteron when you should only be using Xeon or Itanium2 processors.


    Sorry, but that is BS. Itanium is mostly dead technology and not
    really developed anymore. It is also massively over-priced. Xeons are
    sort of not-quite 64-bit CPUs, whose main characteristic is being
    Intel and expensive.

    I also know of no indications (except marketing BS by Intel) that
    Opterons are unreliable.

    Arno





  11. Re: SCSI vs SATA High-Perf

    Arno Wagner wrote:

    > Sorry, but that is BS. Itanium is mostly dead technology and not
    > really developed anymore. It is also massively over-priced. Xeons are
    > sort of not-quite 64 bit CPUs, that have the main characteristic of
    > being Intel and expensive.


    You need to catch up with the times. You are correct about the original
    Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    which are also 64-bit. As for Intel being expensive, you get what you
    pay for. The new Itanium2 systems are SWEEEEEEET!

    > I also know of no indications (except marketing BS by Intel) that
    > Opterons are unreliable.


    It's being proven in the field daily. You simply don't see Opteron-based
    solutions being deployed by major commercial and governmental entities.
    True, there are a few *novelty* systems that use many Opteron processors,
    but they are more a curiosity than the mainstream norm. That said, if I
    wanted a dirt-cheap gaming system I would opt for an Opteron-based SATA box.





    Rita




  12. Re: SCSI vs SATA High-Perf

    Arno Wagner wrote:

    > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >> Arno Wagner wrote:
    >>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:

    >
    >>>
    >>> One thing you can be relatively sure of is that the SCSI controller
    >>> will work well with the mainboard. Also Linux has a long history of
    >>> supporting SCSI, while SATA support is new and still being worked on.
    >>>
    >>> For you access scenario, SCSI will also be superior, since SCSI
    >>> has supported command queuing for a long time.
    >>>
    >>> I also would not trust the Raptors as I would trust SCSI drives.
    >>> The SCSI manufacturers know that SCSI customers expect high
    >>> reliability, while the Raptor is more a poor man's race car.

    >
    >
    >> My main concern is their novelty, rather then their performance. Call
    >> it a hunch but it just doesn't feel right to risk it while there's a
    >> proven solid SCSI solution for the same price.

    >
    >>>
    >>> One more argument: You can put Config 2 on a 550W (redundant)
    >>> PSU, while Config 1 will need something significantly larger,

    >
    >> Thanks for your comments. I forgot about the Power. Definitely worth
    >> considering since we're getting 3 of these servers and UPS sizing
    >> should also play in the cost equation.

    >
    > Power is critical to reliability. If you have a PSU with, say
    > 50% normal and 70% peak load, that is massively more reliable than
    > one with 70%/100%. Also many PSUs die on start-up, since e.g.
    > disks draw their peak currents on spindle start.
    >
    >>> also because SATA does not support staggered start-up, while
    >>> SCSI does. Is that already factored into the cost?

    >
    >> This I don't follow, what's staggered start-up ?

    >
    > You can jumper most (all?) SCSI drives to delay their spindle-start.
    > Spindle start results in a massive amount of power drawn for some
    > seconds. Maybe as much as 2-3 times the peaks you see during operation.
    >
    > SCSI drives can be jumpered to spin up on power-on or on receiving
    > a start-unit command. Some also support delays. You should be
    > able to set the SCSI controller to issue the start-unit command
    > to the drives with, say, 5 seconds delay between each unit or so.
    > This massively reduces the power drawn on start-up.
    >
    > SATA drives all (?) spin up on power-on. It is a problem
    > when you have many disks. The PSU needs the reserves to deal
    > with this worst case.


    Would you do the world a favor and actually take ten minutes to research
    your statements before you make them? All SATA drives sold as "enterprise"
    drives have the ability to perform staggered spinup.

    > Arno


    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)

  13. Re: SCSI vs SATA High-Perf


    Opteron is not a processor to be taken seriously ???? Any backing
    with hard numbers for what you're saying ? We have a whole 64-node
    dual-Opteron cluster running 64-bit applications for more than a year,
    and it's been not only reliable but, given the nature of our
    applications, crucial at a time when Intel was resting on its 32-bit
    laurels and convincing the industry and neophytes that 64-bit equals
    Itanium only. I applaud AMD for their screw-Intel approach, giving
    folks like us a great cost-effective 64-bit option. If the Opteron
    wasn't successful, Intel would never have come up with the 64-bit
    Xeon; their mantra would have been "Buy Itanium". Have you tried to
    cost out a 64-node dual Itanic lately ?? Moreover, our current
    file servers are Xeon-based and we don't feel confident running a
    64-bit OS and/or XFS on them.

    The only consideration I had for the Xeons was their wider choice of
    mobo availability, and the new boards with 4x, 8x and 16x PCI-Express
    options, which might prevent PCI bus saturation in some extreme video
    streaming or large sequential read applications, which is not the case
    in our scenario. You might also need 10Gb Ethernet to cope with such a
    data stream.

    Parsifal


  14. Re: SCSI vs SATA High-Perf

    In article <114b3ubcrc6am5e@news.supernews.com>,
    "Rita Berkowitz" wrote:

    > Arno Wagner wrote:
    >
    > > Sorry, but that is BS. Itanium is mostly dead technology and not
    > > really developed anymore. It is also massively over-priced. Xeons are
    > > sort of not-quite 64 bit CPUs, that have the main characteristic of
    > > being Intel and expensive.

    >
    > You need to catch up with the times. You are correct about the original
    > Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    > which are also 64-bit. As for Intel being expensive, you get what you pay
    > for. The new Itanium2 sytems are SWEEEEEEET!
    >
    > > I also know of no indications (except marketing BS by Intel) that
    > > Opterons are unreliable.

    >
    > It's being proven in the field daily. You simple don't see Opteron based
    > solutions being deployed by major commercial and governmental entities.
    > True, there are a few *novelty* systems that use many Opteron processors,
    > but they are merely a curiosity than the mainstream norm. That said, if I
    > wanted a dirt-cheap gaming system I would opt for an Opteron based SATA box.


    April Fool's a week early?

  15. Re: SCSI vs SATA High-Perf

    In comp.sys.ibm.pc.hardware.storage flux wrote:
    > In article <114b3ubcrc6am5e@news.supernews.com>,
    > "Rita Berkowitz" wrote:


    >> Arno Wagner wrote:
    >>
    >> > Sorry, but that is BS. Itanium is mostly dead technology and not
    >> > really developed anymore. It is also massively over-priced. Xeons are
    >> > sort of not-quite 64 bit CPUs, that have the main characteristic of
    >> > being Intel and expensive.

    >>
    >> You need to catch up with the times. You are correct about the original
    >> Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    >> which are also 64-bit. As for Intel being expensive, you get what you pay
    >> for. The new Itanium2 systems are SWEEEEEEET!
    >>
    >> > I also know of no indications (except marketing BS by Intel) that
    >> > Opterons are unreliable.

    >>
    >> It's being proven in the field daily. You simply don't see Opteron based
    >> solutions being deployed by major commercial and governmental entities.
    >> True, there are a few *novelty* systems that use many Opteron processors,
    >> but they are more a curiosity than the mainstream norm. That said, if I
    >> wanted a dirt-cheap gaming system I would opt for an Opteron based SATA box.


    > April Fool's a week early?


    Probably suppressed machine rage. I know I have some. But then what
    do I know, I use AMD CPUs and cheap drives. Probably deserve all
    the problems I have ;-)

    Arno

  16. Re: SCSI vs SATA Hih-Perf

    > As somebody with now perhaps ~10 CPU years actual usage on AMD CPUs
    > (mostly Athlons) under Linux I cannot agree. I have had troubles, but
    > not a single problem because of the CPUs.


    Usually, when people say "Athlons are worse", this is due to the poorer
    quality of the _chipsets and mobos_, and not AMD's CPUs themselves.

    VIA chipsets were traditionally worse than Intel ones - for instance, in terms
    of lame ACPI support.

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com



  17. Re: SCSI vs SATA Hih-Perf

    In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    >> As somebody with now perhaps ~10 CPU years actual usage on AMD CPUs
    >> (mostly Athlons) under Linux I cannot agree. I have had troubles, but
    >> not a single problem because of the CPUs.


    > Usually, when people say "Athlons are worse", this is due to the poorer
    > quality of the _chipsets and mobos_, and not AMD's CPUs themselves.


    In the beginning that was certainly true, especially as AMD chipsets
    did not get as much R&D as the Intel ones because of their low market
    share. I think that is in the past now.

    > VIA chipsets were traditionally worse than Intel ones - for
    > instance, in terms of lame ACPI support.


    Agreed. I had those problems. In fact I believe ACPI only became
    usable recently. However, it is not really needed on a server.

    Arno



  18. Re: SCSI vs SATA Hih-Perf

    On Sat, 26 Mar 2005 06:48:01 -0500, "Rita Berkowitz" wrote:

    >lmanna@gmail.com wrote:
    >> Hello all,
    >>
    >> Which of the two following architectures would you choose for a
    >> high-perf NFS server in a cluster env. Most of our data ( 80% ) is
    >> small ( < 64 kb ) files. Reads and Writes are similar and mostly
    >> random in nature:

    >
    >I wouldn't use either one of them since your major flaw would be using an
    >Opteron when you should only be using Xeon or Itanium2 processors. Now, if
    >you are just putting an MP3 server in the basement of your home for
    >light-duty work you can squeak by with the Opterons. As for the drives, I
    >would only use SCSI in the system you mention.


    Rita,

    You've got to be the most predictable poster on usenet. Many of us
    would choke if you ever made different points, or sold Intel & scsi
    based machines without trolling.

  19. Re: SCSI vs SATA Hih-Perf

    Curious George wrote:

    > You've got to be the most predictable poster on usenet. Many of us
    > would choke if you ever made different points, or sold Intel & scsi
    > based machines without trolling.


    LOL! Is there really anything else worth using besides an Intel based SCSI
    system? Point made!



    Rita




  20. Re: SCSI vs SATA Hih-Perf

    On Sun, 27 Mar 2005 06:48:05 -0500, "Rita Berkowitz" wrote:

    >Curious George wrote:
    >
    >> You've got to be the most predictable poster on usenet. Many of us
    >> would choke if you ever made different points, or sold Intel & scsi
    >> based machines without trolling.

    >
    >LOL! Is there really anything else worth using besides an Intel based SCSI
    >system? Point made!



    For x86 "Servers" & "Workstations" I'm also a Supermicro slut and a
    Seagate SCSI bigot. These are safe bets for a stable, reliable
    platform; their quality & consistency make integration easy & yield
    good value. Believe it or not, though, there are other worthwhile
    things, & an inflexible one-size-fits-all approach is inherently
    flawed. But that's not my point. Your answer proved it nonetheless.
