Thread: Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

  1. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    > Keep in mind that any application that writes enormous files to a
    > Windows network share will experience gradual but steady performance
    > degradation over time. This is due to a performance bug in Windows
    > itself, and has nothing to do with the application that is writing
    > the data. This can be easily reproduced by writing a simple app that
    > does nothing but constantly write a continuous stream of data to a
    > specified file.


    Exactly so, we have noticed it and measured it.

    This is MS's issue, and is possibly related to cache pollution -
    polluting the cache faster than the lazy writer can flush it. Tweaking
    the cache settings in the registry (after finding the MS KB article
    about them) can be a good idea.
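
    The repro described in the quote above needs only a few lines. A
    minimal sketch, with the target path, buffer size, and reporting
    interval as arbitrary placeholders:

        /*
         * Minimal repro sketch: stream data into a single file on the
         * share and watch sustained throughput fall off as the file
         * grows. "Z:\stream.bin" stands in for any mapped network path.
         */
        #include <stdio.h>
        #include <string.h>
        #include <time.h>

        int main(void)
        {
            static char buf[1 << 20];            /* 1 MB of dummy data */
            FILE *f = fopen("Z:\\stream.bin", "wb");
            time_t start = time(NULL);
            unsigned long long mb;

            if (f == NULL)
                return 1;
            memset(buf, 0xAB, sizeof buf);

            for (mb = 1; ; mb++) {
                if (fwrite(buf, 1, sizeof buf, f) != sizeof buf)
                    break;                   /* disk full or share gone */
                if (mb % 1024 == 0) {        /* report once per GB */
                    double secs = difftime(time(NULL), start);
                    if (secs > 0)
                        printf("%llu MB written, avg %.1f MB/s\n",
                               mb, mb / secs);
                }
            }
            fclose(f);
            return 0;
        }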

    > the backup image into ~50GB pieces. Most backup apps support
    > splitting the backup image file.


    ShadowProtect surely supports this, and I think Acronis and Norton
    Ghost/LSR do too.

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  2. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    On Apr 4, 12:31 pm, "Maxim S. Shatskih" wrote:
    > > Keep in mind that any application that writes enormous files to a
    > > Windows network share will experience gradual but steady performance
    > > degradation over time. This is due to a performance bug in Windows
    > > itself, and has nothing to do with the application that is writing
    > > the data. This can be easily reproduced by writing a simple app that
    > > does nothing but constantly write a continuous stream of data to a
    > > specified file.

    >
    > Exactly so, we have noticed it and measured it.
    >
    > This is MS's issue, and is possibly related to cache pollution -
    > polluting the cache faster than the lazy writer can flush it. Tweaking
    > the cache settings in the registry (after finding the MS KB article
    > about them) can be a good idea.
    >
    > > the backup image into ~50GB pieces. Most backup apps support
    > > splitting the backup image file.

    >
    > ShadowProtect surely supports this, and I think Acronis and Norton
    > Ghost/LSR do too.
    >
    > --
    > Maxim Shatskih, Windows DDK MVP
    > StorageCraft Corporation
    > m...@storagecraft.com
    > http://www.storagecraft.com


    Thanks for that tidbit. I'll either break them up and test again, or
    try to find the MS solution.

    Sure enough, ShadowProtect ended up at 9 hours as well.


  3. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    In comp.sys.ibm.pc.hardware.storage markm75 wrote:
    > On Apr 4, 12:31 pm, "Maxim S. Shatskih" wrote:
    >> > Keep in mind that any application that writes enormous files to a
    >> > Windows network share will experience gradual but steady performance
    >> > degradation over time. This is due to a performance bug in Windows
    >> > itself, and has nothing to do with the application that is writing
    >> > the data. This can be easily reproduced by writing a simple app that
    >> > does nothing but constantly write a continuous stream of data to a
    >> > specified file.

    >>
    >> Exactly so, we have noticed it and measured it.
    >>
    >> This is MS's issue, and is possibly related to cache pollution -
    >> polluting the cache faster than the lazy writer can flush it. Tweaking
    >> the cache settings in the registry (after finding the MS KB article
    >> about them) can be a good idea.
    >>
    >> > the backup image into ~50GB pieces. Most backup apps support
    >> > splitting the backup image file.

    >>
    >> ShadowProtect surely supports this, and I think Acronis and Norton
    >> Ghost/LSR do too.
    >>
    >> --
    >> Maxim Shatskih, Windows DDK MVP
    >> StorageCraft Corporation
    >> m...@storagecraft.com
    >> http://www.storagecraft.com


    > Thanks for that tidbit. I'll either break them up and test again, or
    > try to find the MS solution.


    > Sure enough, ShadowProtect ended up at 9 hours as well.


    Well, that would explain it. Once again, MS is using substandard
    technology. I hope you find a solution to this, but I certainly
    have no clue what it could be.

    Arno

  4. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    > Well, that would explain it. Once again, MS is using substandard
    > technology.


    I would not say that SMB slowdown on files >100GB is "substandard" for a mass
    market commodity OS.

    This is a rare corner case in fact, with image backup software being
    nearly the only user of it, and it can split the image into smaller
    files.

    Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  5. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    >> Well, that would explain it. Once again, MS is using substandard
    >> technology.


    > I would not say that SMB slowdown on files >100GB is "substandard"
    > for a mass market commodity OS.


    Hmm. I think that if it supports files > 100GB, then it should support
    them without surprises. Of course, if you say ''commodity'' = ''not
    really for mission critical stuff'', then I can agree.

    > This is a rare corner case in fact, with image backup software
    > being nearly the only user of it, and it can split the image into
    > smaller files.


    > Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)


    I wouldn't know. Linux ext2/3 has a 2TB file size limit.

    But that was actually not my point. My point is that if it is
    supported, then it should be supported well. Not supporting it at all
    is better than letting you think you can use it, only to have things
    go wrong in actual usage. I believe this whole thread shows that ;-)

    So ''substandard'' = ''the features are there but you should not
    really use them to their limits'', a.k.a. ''we did it, but we did not
    really do it right''.

    Arno

  6. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    On Apr 5, 5:23 am, Arno Wagner wrote:
    > In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    >
    > >> Well, that would explain it. Once again, MS is using substandard
    > >> technology.

    > > I would not say that SMB slowdown on files >100GB is "substandard"
    > > for a mass market commodity OS.

    >
    > Hmm. I think that if it supports files > 100GB, then it should support
    > them without surprises. Of course, if you say ''commodity'' = ''not
    > really for mission critical stuff'', then I can agree.
    >
    > > This is a rare corner case in fact, with image backup software
    > > being nearly the only user of it, and it can split the image into
    > > smaller files.
    > > Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)

    >
    > I wouldn't know. Linux ext2/3 has a 2TB file size limit.
    >
    > But that was actually not my point. My point is that if it is
    > supported, then it should be supported well. Not supporting it at all
    > is better than letting you think you can use it, only to have things
    > go wrong in actual usage. I believe this whole thread shows that ;-)
    >
    > So ''substandard'' = ''the features are there but you should not
    > really use them to their limits'', a.k.a. ''we did it, but we did not
    > really do it right''.
    >
    > Arno


    Results are in. I used ShadowProtect, set to 50GB files at a time on
    the backup file side. Average throughput was 23 MB/s; it finished in
    4 hr 25 min, the same time a local backup took (this was across
    gigabit).

    So I guess it's true - there is something to the cache pollution /
    registry issue? Does anyone have a KB article where I could find the
    tweak and try this again without splitting the backup files? (Not
    sure what I'm searching for exactly.)

    Thanks


  7. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    In comp.sys.ibm.pc.hardware.storage markm75 wrote:
    > On Apr 5, 5:23 am, Arno Wagner wrote:
    >> In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    >>
    >> >> Well, that would explain it. Once again, MS is using substandard
    >> >> technology.
    >> > I would not say that SMB slowdown on files >100GB is "substandard"
    >> > for a mass market commodity OS.

    >>
    >> Hmm. I think that if it supports files > 100GB, then it should support
    >> them without surprises. Of course, if you say ''commodity'' = ''not
    >> really for mission critical stuff'', then I can agree.
    >>
    >> > This is a rare corner case in fact, with image backup software
    >> > being nearly the only user of it, and it can split the image into
    >> > smaller files.
    >> > Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)

    >>
    >> I wouldn't know. Linux ext2/3 has a 2TB file size limit.
    >>
    >> But that was actually not my point. My point is that if it is
    >> supported, then it should be supported well. Not supporting it at
    >> all is better than letting you think you can use it, only to have
    >> things go wrong in actual usage. I believe this whole thread shows
    >> that ;-)
    >>
    >> So ''substandard'' = ''the features are there but you should not
    >> really use them to their limits'', a.k.a. ''we did it, but we did not
    >> really do it right''.
    >>
    >> Arno


    > Results are in. I used ShadowProtect, set to 50GB files at a time on
    > the backup file side. Average throughput was 23 MB/s; it finished in
    > 4 hr 25 min, the same time a local backup took (this was across
    > gigabit).


    Interesting.

    > So I guess it's true - there is something to the cache pollution /
    > registry issue? Does anyone have a KB article where I could find the
    > tweak and try this again without splitting the backup files? (Not
    > sure what I'm searching for exactly.)


    Why not just split the backup? This seems to work, after all.
    If you want this sorted a bit better, put each backup set
    into its own subdirectory.

    Arno


  8. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    On 5 Apr 2007 09:23:26 GMT, Arno Wagner wrote:

    >In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    >>> Well, that would explain it. Once again, MS is using substandard
    >>> technology.

    >
    >> I would not say that SMB slowdown on files >100GB is "substandard"
    >> for a mass market commodity OS.

    >
    >Hmm. I think that if it supports files > 100GB, then it should support
    >them without surprises. Of course, if you say ''commodity'' = ''not
    >really for mission critical stuff'', then I can agree.
    >
    >> This is a rare corner case in fact, with image backup software
    >> being nearly the only user of it, and it can split the image into
    >> smaller files.

    >
    >> Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)

    >
    >I wouldn't know. Linux ext2/3 has a 2TB file size limit.
    >
    >But that was actually not my point. My point is that if it is
    >supported, then it should be supported well. Not supporting it at all
    >is better than letting you think you can use it, only to have things
    >go wrong in actual usage. I believe this whole thread shows that ;-)
    >
    >So ''substandard'' = ''the features are there but you should not
    >really use them to their limits'', a.k.a. ''we did it, but we did not
    >really do it right''.
    >
    >Arno



    Careful of the trail you blaze.

    The automounter and NFS client subsystems in Linux are beyond
    substandard. They exist, and they will work if you do not use them
    heavily.

    I dislike MS more than most, but throwing stones will only break your
    own windows (no pun intended on that one).

    ~F

  9. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    > I wouldn't know. Linux ext2/3 has a 2TB file size limit.

    Sorry, see the excerpt from include/linux/ext2_fs.h below, and the
    "__u32 i_size;" field in it.

    ext2's limit is 4GB. I remember ext3 being compatible with ext2 in
    on-disk structures in everything except the transaction log, so it
    looks like ext3 is also limited to 4GB per file.

    Moreover, if you also look at the superblock structure, you will see
    that ext2 is limited to 32-bit block numbers in the volume. There is
    a good chance this means a volume size limit of 2TB (if a "block" is
    really a disk sector and not a group of sectors).
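
    (For the arithmetic: 2^32 block numbers times 512 bytes per block
    would give 2TB; with 4kB blocks the same 32-bit field would allow
    16TB.)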

    /*
     * Structure of an inode on the disk
     */
    struct ext2_inode {
            __u16 i_mode;         /* File mode */
            __u16 i_uid;          /* Owner Uid */
            __u32 i_size;         /* Size in bytes */
            __u32 i_atime;        /* Access time */
            __u32 i_ctime;        /* Creation time */
            __u32 i_mtime;        /* Modification time */
            __u32 i_dtime;        /* Deletion Time */
            __u16 i_gid;          /* Group Id */
            __u16 i_links_count;  /* Links count */
            __u32 i_blocks;       /* Blocks count */
            __u32 i_flags;        /* File flags */
            ...
    };

    > supported, then it should be supported well. Not supporting it at
    > all is better than letting you think you can use it, only to have
    > things go wrong in actual usage. I believe this whole thread shows
    > that ;-)


    Let's wait for MS's hotfixes and service packs. Such "corner case"
    issues (circumstances rarely met in real life) do occur in any software.

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  10. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    Maxim S. Shatskih wrote:
    >> I wouldn't know. Linux ext2/3 has a 2TB file size limit.

    >
    >Sorry, see the excerpt from include/linux/ext2_fs.h below, and the
    >"__u32 i_size;" field in it.
    >
    >ext2's limit is 4GB. I remember ext3 being compatible with ext2 in
    >on-disk structures in everything except the transaction log, so it
    >looks like ext3 is also limited to 4GB per file.


    Don't be silly; even a minimal amount of checking would have shown
    this to be false for a long time now. The exact file size limit for
    ext2/ext3 depends on block size; with the default 4 kB it's 2 TB (and
    it's been rare to see any other block size for a long time now).

    IIRC at least (some?) 2.2 kernels had this, though glibc support on
    32-bit platforms lagged a bit. Since I was on 32-bit platforms back
    then, it might well have come MUCH earlier (2.0? 1.2?).

    ext2/ext3 has a system of "features" which can be added; both fully
    compatible and forward-compatible flags are available, so as to avoid
    corruption on incompatible features. (ext3 is IIRC a set of two
    options: one which says that it has a journal, and one which is set
    when mounted and removed when unmounted - this is why ext2 only mounts
    *clean* ext3 filesystems.) IIRC NTFS has something not that
    dissimilar...

    The feature you are looking for is "large_file"; it is set
    automatically when the first >2GB file is created by a kernel which
    supports it. I've not read the code, but the following line from the
    same file you quoted makes me think they stash the upper bits of the
    file size in i_dir_acl (which probably isn't used for files anyway).

    From include/linux/ext2_fs.h:

        struct ext2_inode {
                ....
                __u32 i_size;     /* Size in bytes */
                ....
                __u32 i_dir_acl;  /* Directory ACL */
        };
        #define i_size_high i_dir_acl
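
    Presumably a large_file-aware kernel then reconstructs the full
    64-bit size along these lines (a sketch of the idea, not the actual
    kernel code):

        /* Sketch only: combine the two 32-bit on-disk fields into the
         * full 64-bit file size; i_size_high is the i_dir_acl slot,
         * per the #define above. */
        unsigned long long ext2_file_size(const struct ext2_inode *raw)
        {
                return (unsigned long long)raw->i_size
                     | ((unsigned long long)raw->i_size_high << 32);
        }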

    The 2 TB file size limit actually comes from i_blocks. Google found a
    patch to extend this, but I don't think anyone is really that
    interested at the moment. There are a LARGE number of other
    filesystems for Linux that support this, if someone actually needs it!
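
    (The arithmetic behind that: i_blocks is a __u32 counting 512-byte
    units, so it tops out at 2^32 * 512 bytes = 2 TiB per file,
    independent of the filesystem block size - if I read the header
    right.)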


    >Moreover, if you also look at the superblock structure, you will see
    >that ext2 is limited to 32-bit block numbers in the volume. There is
    >a good chance this means a volume size limit of 2TB (if a "block" is
    >really a disk sector and not a group of sectors).


    I have no reason to doubt the statement in Wikipedia and other places,
    which for Linux 2.6 means 16 TB for ext3, assuming the standard 4 kB
    block size.

    (It depends on block size, but unpatched 2.4 and earlier has a hard
    limit at 2TB. Not sure if this applies to all 2.4 distributions; some
    were heavily enhanced with features from 2.6.)

    http://en.wikipedia.org/wiki/Ext2
    http://en.wikipedia.org/wiki/Comparison_of_file_systems

  11. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    > ext2/ext3 has a system of "features" which can be added

    So, >4GB files on ext2 is one of these additional features? OK,
    thanks, good to know.

    > *clean* ext3 filesystems.) IIRC NTFS has something not that
    > dissimilar...


    NTFS is more like ReiserFS. From what I've read of the ReiserFS
    design, it is just a plain remake based on the same ideas as NTFS:
    attribute streams, B-tree directories, an MFT, etc.

    NTFS just predates ReiserFS by around 10 years, which is a clear sign
    of the "substandard technologies used by MS" :-) The only competitors
    to NTFS back in 1993 were VMS's filesystem and Veritas's product for
    Solaris.

    > (It depends on block size, but unpatched 2.4 and earlier has a hard
    > limit at 2TB


    So I'm not that wrong. The 2TB limit was there until quite recently.

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  12. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    Maxim S. Shatskih wrote:

    ....

    > From what I've read of the ReiserFS design, it is just a plain
    > remake based on the same ideas as NTFS


    Then you haven't read nearly enough to have a clue.

    - bill

  13. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    >> I wouldn't know. Linux ext2/3 has a 2TB file size limit.


    > Sorry, see the excerpt from include/linux/ext2_fs.h below, and the
    > "__u32 i_size;" field in it.


    > ext2's limit is 4GB. I remember ext3 being compatible with ext2 in
    > on-disk structures in everything except the transaction log, so it
    > looks like ext3 is also limited to 4GB per file.


    Well, yes, if you use a pretty old kernel, or turn large file
    support off. The standard limit is 2TB at the moment. And you don't
    need to quote kernel source at me; I happen to have files >4GB on
    ext2 at this moment. The inode format was extended some time ago.

    An overview over the current limits of ext2 is, e.g., here:

    http://en.wikipedia.org/wiki/Ext2

    One thing you need to do in your software for it to be able
    to handle large files is to define
    #define _FILE_OFFSET_BITS 64
    so that all the relevant types become 64 bits transparently. Note
    that you need to use the functions taking ''off_t'' for position
    specification.
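
    For instance (a minimal sketch; the file name is a placeholder):

        /* Must come before any system header so that off_t, and the
         * stdio offsets behind it, become 64-bit; equivalent to
         * compiling with -D_FILE_OFFSET_BITS=64. */
        #define _FILE_OFFSET_BITS 64
        #define _LARGEFILE_SOURCE       /* declares fseeko()/ftello() */

        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("big.img", "wb");
                if (f == NULL)
                        return 1;
                /* Seek past the old 31-bit limit; fseeko() takes an
                 * off_t, unlike fseek(), which takes a long. */
                if (fseeko(f, 5LL * 1024 * 1024 * 1024, SEEK_SET) != 0) {
                        fclose(f);
                        return 1;
                }
                fputc(0, f);    /* now a sparse file of ~5GB */
                fclose(f);
                return 0;
        }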

    > Moreover, if you also look at the superblock structure, you will
    > see that ext2 is limited to 32-bit block numbers in the volume.
    > There is a good chance this means a volume size limit of 2TB (if a
    > "block" is really a disk sector and not a group of sectors).


    The filesystem size limit currently is 16TB. But you need large block
    device support enabled in the kernel to use that; I think it is not
    yet the default.

    Arno

  14. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    >> ext2/ext3 has a system of "features" which can be added


    > So, >4GB files on ext2 is one of these additional features? OK,
    > thanks, good to know.


    >> *clean* ext3 filesystems.) IIRC NTFS has something not that
    >> dissimilar...


    > NTFS is more like ReiserFS. From what I've read of the ReiserFS
    > design, it is just a plain remake based on the same ideas as NTFS:
    > attribute streams, B-tree directories, an MFT, etc.


    > NTFS just predates ReiserFS by around 10 years, which is a clear
    > sign of the "substandard technologies used by MS" :-) The only
    > competitors to NTFS back in 1993 were VMS's filesystem and Veritas's
    > product for Solaris.


    >> (It depends on block size, but unpatched 2.4 and earlier has a hard
    >> limit at 2TB


    > So I'm not that wrong. The 2TB limit was there until quite recently.


    Well, kernel 2.6.0 was published in December 2003. I would not call
    four years ''quite recently'', considering disk sizes in 2003.

    There are, BTW, some more filesystems available under Linux, and
    they are basically all pretty compatible. For really large
    filesystems you would probably not use ext2 anyway, but perhaps
    XFS (which has been available on Linux since around 2001).
    XFS has a file size limit and a filesystem size limit of 8 exabytes.

    Arno

  15. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    "Arno Wagner" wrote in message news:57jkd9F2dh888U1@mid.individual.net
    > In comp.sys.ibm.pc.hardware.storage markm75 wrote:
    > > On Apr 4, 12:31 pm, "Maxim S. Shatskih" wrote:
    > > > > Keep in mind that any application that writes enormous files to a
    > > > > Windows network share will experience gradual but steady performance
    > > > > degradation over time. This is due to a performance bug in Windows
    > > > > itself, and has nothing to do with the application that is writing
    > > > > the data. This can be easily reproduced by writing a simple app that
    > > > > does nothing but constantly write a continuous stream of data to a
    > > > > specified file.
    > > >
    > > > Exactly so, we have noticed it and measured it.
    > > >
    > > > This is MS's issue, and is possibly related to cache pollution -
    > > > polluting the cache faster than the lazy writer can flush it. Tweaking
    > > > the cache settings in the registry (after finding the MS KB article
    > > > about them) can be a good idea.
    > > >
    > > > > the backup image into ~50GB pieces. Most backup apps support
    > > > > splitting the backup image file.
    > > >
    > > > ShadowProtect surely supports this, and I think Acronis and Norton
    > > > Ghost/LSR do too.
    > > >
    > > > --
    > > > Maxim Shatskih, Windows DDK MVP
    > > > StorageCraft Corporation
    > > > m...@storagecraft.com
    > > > http://www.storagecraft.com

    >
    > > Thanks for that tidbit. I'll either break them up and test again,
    > > or try to find the MS solution.

    >
    > > Sure enough, ShadowProtect ended up at 9 hours as well.


    > Well, that would explain it. Once again, MS is using substandard technology.


    > I hope you find a solution to this, but I certainly have no clue what it could be.


    Not uncommon when you have as many brainfarts as you do, babblebot.

    >
    > Arno


  16. Re: Raid0 or Raid5 for network to disk backup (Gigabit)?

    "Arno Wagner" wrote in message news:57jtgeF2dn5maU1@mid.individual.net
    > In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih wrote:
    > > > Well, that would explain it. Once again, MS is using substandard
    > > > technology.

    >
    > > I would not say that SMB slowdown on files >100GB is "substandard"
    > > for a mass market commodity OS.

    >
    > Hmm. I think that if it supports files > 100GB, then it should support
    > them without surprises. Of course, if you say ''commodity'' = ''not
    > really for mission critical stuff'', then I can agree.
    >
    > > This is a rare corner case in fact, with image backup software
    > > being nearly the only user of it, and it can split the image into
    > > smaller files.

    >
    > > Note that lots of UNIX-derived OSes still have a 4GB file size limit :-)

    >
    > I wouldn't know. Linux ext2/3 has a 2TB file size limit.


    > But that was actually not my point. My point is that if it is
    > supported, then it should be supported well. Not supporting it at
    > all is better than letting you think you can use it, only to have
    > things go wrong in actual usage.


    Nothing goes 'wrong', you babblebot moron, it only gets slow.

    > I believe this whole thread shows that ;-)


    What this thread shows is that you don't know anything, babblebot;
    you are just feeding on others for information to badmouth MS, you
    Lunix zealot.

    >
    > So ''substandard'' = ''the features are there but you should not really use
    > them to their limits'', a.k.a. ''we did it, but we did not really do it right''.


    It's the OS showing its limits, not the file system.

    >
    > Arno

