Thread: How come Linux/Unix filesystems don't seem to get fragmented?

  1. How come Linux/Unix filesystems don't seem to get fragmented?


    I am an experienced sysadmin. Back in my days at MIT, we had to order
    a defragmenter for VAX/VMS, and run it frequently, as those machines
    got unusable due to fragmentation (well, that was the most abused disk
    I have ever seen).

    Likewise, I keep on seeing disk fragmentation on Windows. Don't
    remember any Macs having that problem.

    Well, the point is that in some 15 years being in charge of Unix
    (Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I don't recall
    ever having problems caused by fragmentation.

    -Ramon


  2. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    On 2007-12-01, Ramon F Herrera wrote:

    > Well, the point is that in some 15 years being in charge of
    > Unix (Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I
    > don't recall ever having problems caused by fragmentation.


    Competently designed file systems.

    --
    Grant Edwards                grante at visi.com
    Yow! Now KEN is having a MENTAL CRISIS because
    his "R.V." PAYMENTS are OVER-DUE!!

  3. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    On Dec 1, 3:26 pm, Ramon F Herrera wrote:
    >
    > Well, the point is that in some 15 years being in charge of Unix
    > (Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I don't recall
    > ever having problems caused by fragmentation.


    I think that's because the filesystems go out of their way to avoid
    fragmentation. There used to be a very nice paper on the design of
    the Berkeley FFS (from which Solaris's UFS is descended), which
    described various techniques it used. I believe that the previous
    Unix FS, as used in 7th edition, *did* get fragmented. I suspect the
    underlying technique is being willing, on occasion, to gratuitously
    copy parts of a file so that it becomes contiguous, or more nearly
    contiguous, again, but there are also tricks such as keeping
    directories near their contents and so on.

    However, Unix filesystems can (or could until fairly recently) suffer
    fragmentation under really bad load conditions. The classic nasty
    case is a filesystem with lots of small files that is almost always
    nearly full and has a significant turnover of files; Usenet news
    spools were a good instance of this.
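
    If you want to see that worst case for yourself, something like the
    following provokes it on a Linux box (a rough sketch: the scratch path
    is made up, it assumes filefrag(8) from e2fsprogs is installed, and
    how badly the big files come out fragmented depends on how full the
    filesystem actually is):

        import os
        import random
        import subprocess

        SCRATCH = "/mnt/test/scratch"     # hypothetical small test filesystem

        os.makedirs(SCRATCH, exist_ok=True)

        # 1. lots of small files
        small = [os.path.join(SCRATCH, "small-%d" % i) for i in range(5000)]
        for path in small:
            with open(path, "wb") as f:
                f.write(os.urandom(4096))

        # 2. significant turnover: delete a random half, leaving holes
        for path in random.sample(small, len(small) // 2):
            os.unlink(path)

        # 3. write a few bigger files into the holes and count their extents
        for i in range(5):
            big = os.path.join(SCRATCH, "big-%d" % i)
            with open(big, "wb") as f:
                f.write(os.urandom(16 * 1024 * 1024))    # 16 MB each
            subprocess.run(["filefrag", big])   # prints "...: N extents found"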

    --tim

  4. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Ramon F Herrera wrote:
    > I am an experienced sysadmin. Back in my days at MIT, we had to order
    > a defragmenter for VAX/VMS, and run it frequently, as those machines
    > got unusable due to fragmentation (well, that was the most abused disk
    > I have ever seen).
    >
    > Likewise, I keep on seeing disk fragmentation on Windows. Don't
    > remember any Macs having that problem.
    >
    > Well, the point is that in some 15 years being in charge of Unix
    > (Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I don't recall
    > ever having problems caused by fragmentation.
    >
    > -Ramon
    >




    hmm, fragmentation does mean that the blocks which hold a file's content
    are not located consecutively on the disk, right? How do you come to the
    conclusion that current unix FS avoid fragmenting files? Simply have two
    processes write two different files on the same jfs2 filesystem at the
    same time, growing the files. Check with fileplace(1) on AIX where the
    files' blocks are located. They are fragmented. I have not come across
    the case where fragmentation caused any problems, but jfs2 on AIX does
    not avoid fragmentation in this case.
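
    Something like this reproduces the experiment (a rough Python sketch;
    it assumes AIX with fileplace(1) on the PATH - on Linux, "filefrag -v"
    gives similar information; the file names are made up):

        import os
        import subprocess

        def grow(path, chunks=2000, chunk_size=4096):
            # Append chunk_size bytes at a time, syncing each write so the
            # two files really do grow interleaved rather than in one burst.
            with open(path, "ab") as f:
                for _ in range(chunks):
                    f.write(b"x" * chunk_size)
                    f.flush()
                    os.fsync(f.fileno())

        files = ["frag_a.dat", "frag_b.dat"]

        # Two processes growing two files on the same filesystem at once.
        pids = []
        for path in files:
            pid = os.fork()
            if pid == 0:
                grow(path)
                os._exit(0)
            pids.append(pid)
        for pid in pids:
            os.waitpid(pid, 0)

        # Now look at where each file's blocks ended up.
        for path in files:
            subprocess.run(["fileplace", "-v", path])

    How badly the two files interleave depends on the allocator and the
    mount options, but this is the case I mean.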

    Regards
    Joachim Gann

    fwup to comp.unix.aix

  5. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Ramon F Herrera wrote:
    > I am an experienced sysadmin. Back in my days at MIT, we had to order
    > a defragmenter for VAX/VMS, and run it frequently, as those machines
    > got unusable due to fragmentation (well, that was the most abused disk
    > I have ever seen).
    >
    > Likewise, I keep on seeing disk fragmentation on Windows. Don't
    > remember any Macs having that problem.
    >
    > Well, the point is that in some 15 years being in charge of Unix
    > (Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I don't recall
    > ever having problems caused by fragmentation.
    >
    > -Ramon
    >


    They don't?? Have you ever tried to figure out exactly where your data
    is stored?


  6. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Ramon F Herrera wrote:
    > I am an experienced sysadmin. Back in my days at MIT, we had to order
    > a defragmenter for VAX/VMS, and run it frequently, as those machines
    > got unusable due to fragmentation (well, that was the most abused disk
    > I have ever seen).
    >
    > Likewise, I keep on seeing disk fragmentation on Windows. Don't
    > remember any Macs having that problem.
    >
    > Well, the point is that in some 15 years being in charge of Unix
    > (Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I don't recall
    > ever having problems caused by fragmentation.
    >
    > -Ramon
    >

    They get fragmented but a good disk caching algorithm means it matters a
    lot less and happens a lot less.

  7. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    In article ,
    Joachim Gann wrote:

    > hmm, fragmentation does mean that the blocks which hold a file's content
    > are not located consecutively on the disk, right? How do you come to the
    > conclusion that current unix FS avoid fragmenting files? Simply have two
    > processes write two different files on the same jfs2 filesystem at the
    > same time, growing the files. Check with fileplace(1) on AIX where the
    > files' blocks are located. They are fragmented. I have not come across
    > the case where fragmentation caused any problems, but jfs2 on AIX does
    > not avoid fragmentation in this case.
    >
    > Regards
    > Joachim Gann
    >
    > fwup to comp.unix.aix


    I think file fragmentation is more general than "the blocks aren't all
    contiguously allocated". If an inode's triple indirect block is used, is
    the file really fragmented? (It would have to be > 32GB to start using
    those.) See http://www.tux4u.nl/freedocs/unix/draw/inode.pdf
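
    As a back-of-the-envelope check of that 32GB figure (a sketch assuming
    classic UFS-style parameters - 12 direct pointers, 8 KB blocks, 4-byte
    block pointers; other filesystems differ):

        BLOCK = 8 * 1024              # filesystem block size in bytes (assumed)
        PTR = 4                       # size of one block pointer (assumed)
        DIRECT = 12                   # direct pointers in the inode
        PER_INDIRECT = BLOCK // PTR   # pointers per indirect block: 2048 here

        direct_max = DIRECT * BLOCK
        single_max = direct_max + PER_INDIRECT * BLOCK
        double_max = single_max + PER_INDIRECT ** 2 * BLOCK

        print("direct blocks cover   %6.0f KB" % (direct_max / 1024.0))
        print("+ single indirect     %6.1f MB" % (single_max / 1024.0 ** 2))
        print("+ double indirect     %6.1f GB" % (double_max / 1024.0 ** 3))
        # Triple indirect is only needed beyond that, i.e. for files > ~32 GB.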

    IIRC, VMS' Files-11 structure only had a volume index (INDEX.SYS) which
    mapped the allocation of blocks on a disk, and entries in directory files
    held the equivalent of UNIX inodes (metadata like create, modify, and
    backup TOD, permissions, ownership, ACLs, disk allocation map). I have
    no idea if you have to defragment the newer "transaction" or journalling
    filesystem that's probably used with newer OpenVMS systems. I haven't
    touched a VAX since 1995, when disks were sub-GB even with an HSC
    cluster disk farm. Anyway, it's easy to see how performance of a
    Files-11 type disk would degrade when big files weren't contiguous.
    It's easy to create hard links on UNIX volumes because the volume has
    the inodes preallocated when you newfs the volume. The kernel manages
    the filesystem and coordinates stuff like file deletion. VMS used a
    subset of its "kernel" running in Executive mode called RMS-11 which
    did all the file stuff, including I/O and _record management_ for files
    that weren't just "a stream of bits".

    --
    DeeDee, don't press that button! DeeDee! NO! Dee...




  8. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Here in comp.os.linux.misc,
    Ramon F Herrera spake unto us, saying:

    >Well, the point is that in some 15 years being in charge of Unix
    >(Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I don't recall
    >ever having problems caused by fragmentation.


    OS/2's HPFS filesystem (designed, ironically, by Gordon Letwin at
    Microsoft) largely avoids fragmentation issues as well.

    I can't come up with a technical reason for Microsoft not wanting to
    use more fragmentation-resistant filesystems.

    A lack of caring, perhaps. :-)

    --
    -Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Mableton, GA USA
    Mainframe/Unix bit twiddler by day, OS/2+Linux+DOS hobbyist by night.
    WARNING: I've seen FIELDATA FORTRAN V and I know how to use it!
    The Theorem Theorem: If If, Then Then.

  9. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    On Dec 1, 5:23 pm, "Richard B. Gilbert"
    wrote:

    >
    > They don't?? Have you ever tried to figure out exactly where your data
    > is stored?


    No one is claiming that these filesystems never fragment files: it's
    obvious that they must do so. What is being argued is that these
    filesystems do rather a good job of minimising problems caused by
    fragmentation, either by minimising fragmentation itself, or by good
    caching algorithms, or (in fact) both.

    --tim


  10. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    [Tim Bradshaw]:
    >
    > No one is claiming that these filesystems never fragment files:
    > it's obvious that they must do so. What is being argued is that
    > these filesystems do rather a good job of minimising problems
    > caused by fragmentation, either by minimising fragmentation
    > itself, or by good caching algorithms, or (in fact) both.


    many Unix applications will not edit existing files, but rather make a
    new copy which replaces the original. this will essentially
    defragment the file if the file system isn't too full.
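
    the usual pattern looks something like this (a minimal sketch in
    Python; error handling simplified, names made up):

        import os
        import tempfile

        def rewrite(path, new_content):
            # Write the replacement into a temporary file on the same
            # filesystem, then atomically rename it over the original.
            # The new copy is written out sequentially, so the allocator
            # gets a chance to lay it out contiguously.
            dirname = os.path.dirname(path) or "."
            fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".tmp-")
            try:
                with os.fdopen(fd, "wb") as f:
                    f.write(new_content)
                    f.flush()
                    os.fsync(f.fileno())
                os.rename(tmp, path)    # atomic on POSIX filesystems
            except BaseException:
                os.unlink(tmp)
                raise

        rewrite("example.conf", b"key = value\n")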

    --
    Kjetil T.

  11. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Kjetil Torgrim Homme wrote:
    > [Tim Bradshaw]:
    >> No one is claiming that these filesystems never fragment files:
    >> it's obvious that they must do so. What is being argued is that
    >> these filesystems do rather a good job of minimising problems
    >> caused by fragmentation, either by minimising fragmentation
    >> itself, or by good caching algorithms, or (in fact) both.

    >
    > many Unix applications will not edit existing files, but rather make a
    > new copy which replaces the original. this will essentially
    > defragment the file if the file system isn't too full.
    >

    and ruin any hard links you have to the original :-)
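
    Right - easy to see for yourself (a small sketch; the file names are
    made up):

        import os

        with open("original", "w") as f:
            f.write("version 1\n")
        os.link("original", "alias")    # second name for the same inode

        print(os.stat("original").st_ino == os.stat("alias").st_ino)  # True

        # the "edit by writing a new copy" step
        with open("original.tmp", "w") as f:
            f.write("version 2\n")
        os.rename("original.tmp", "original")

        print(os.stat("original").st_ino == os.stat("alias").st_ino)  # False
        print(open("alias").read())     # still "version 1"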

  12. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Kjetil Torgrim Homme wrote:
    >[Tim Bradshaw]:
    >>
    >> No one is claiming that these filesystems never fragment files:
    >> it's obvious that they must do so. What is being argued is that
    >> these filesystems do rather a good job of minimising problems
    >> caused by fragmentation, either by minimising fragmentation
    >> itself, or by good caching algorithms, or (in fact) both.

    >
    >many Unix applications will not edit existing files, but rather make a
    >new copy which replaces the original. this will essentially
    >defragment the file if the file system isn't too full.


    Unix does not differ from other OS's in that manner.

    --
    Floyd L. Davidson
    Ukpeagvik (Barrow, Alaska) floyd@apaflo.com

  13. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    * I don't recall
    * ever having problems caused by fragmentation.

    ** They don't?? Have you ever tried to figure out exactly where your data
    ** is stored?

    Fragmentation's not a problem; that's not to say it doesn't exist.

    After reading the Fast Filesystem paper 20 years ago, I remember thinking,
    hmmm - they come pre-fragmented... perhaps it's time to read it again ...

    -Mike

  14. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    In article <8763zgmst4.fld@apaflo.com>, floyd@apaflo.com
    (Floyd L. Davidson) writes:

    > Kjetil Torgrim Homme wrote:
    >
    >> [Tim Bradshaw]:
    >>>
    >>> No one is claiming that these filesystems never fragment files:
    >>> it's obvious that they must do so. What is being argued is that
    >>> these filesystems do rather a good job of minimising problems
    >>> caused by fragmentation, either by minimising fragmentation
    >>> itself, or by good caching algorithms, or (in fact) both.

    >>
    >> many Unix applications will not edit existing files, but rather make
    >> a new copy which replaces the original. this will essentially
    >> defragment the file if the file system isn't too full.

    >
    > Unix does not differ from other OS's in that manner.


    Except that, as I've learned the hard way, the Windows filesystem's
    quirks occasionally cause terrible things to happen during the
    delete/rename portion of the operation. Try googling for "file
    system tunneling" - but only if you have a strong stomach.

    --
    /~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
    \ / I'm really at ac.dekanfrus if you read it the right way.
    X Top-posted messages will probably be ignored. See RFC1855.
    / \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!


  15. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Richard Steiner wrote:

    > Here in comp.os.linux.misc,
    > Ramon F Herrera spake unto us, saying:
    >
    >>Well, the point is that in some 15 years being in charge of Unix
    >>(Solaris, IBM AIX, HP-UX, Linux, even friggin' SCO) I don't recall
    >>ever having problems caused by fragmentation.

    >
    > OS/2's HPFS filesystem (designed, ironically, by Gordon Letwin at
    > Microsoft) largely avoids fragmentation issues as well.
    >
    > I can't come up with a technical reason for Microsoft not wanting to
    > use more fragmentation-resistant filesystems.
    >
    > A lack of caring, perhaps. :-)
    >


    I'd rather blame a lot of stupidity...

    --

    Jerry McBride (jmcbride@mail-on.us)

  16. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    On Dec 2, 10:10 pm, fl...@apaflo.com (Floyd L. Davidson) wrote:
    > Kjetil Torgrim Homme wrote:
    > >many Unix applications will not edit existing files, but rather make a
    > >new copy which replaces the original. this will essentially
    > >defragment the file if the file system isn't too full.
    >
    > Unix does not differ from other OS's in that manner.
    >


    Floyd:

    There are many features in which Windows does not differ from other
    OSs... in theory, that is.

    Let's use recursive copy as an example. Try to do a simple copy of a
    folder with many folders inside. Any somewhat wide and deep filesystem
    will do. Good luck. You will have to use the backup utility for those.

    Any other OS will handle such a simple copy operation flawlessly.

    -Ramon


  17. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    On Dec 3, 10:13 pm, Ramon F Herrera wrote:
    > > Unix does not differ from other OS's in that manner.
    >
    > Floyd:
    >
    > There are many features in which Windows does not differ from other
    > OSs... in theory, that is.
    >
    > Let's use recursive copy as an example. Try to do a simple copy of a
    > folder with many folders inside. Any somewhat wide and deep filesystem
    > will do. Good luck. You will have to use the backup utility for those.
    >
    > Any other OS will handle such a simple copy operation flawlessly.
    >
    > -Ramon


    robocopy

  18. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Ramon F Herrera wrote:
    >Floyd:
    >
    >There are many features in which Windows does not differ from other
    >OSs... in theory, that is.


    What has that got to do with the statements above?

    >Let's use recursive copy as an example. Try to do a simple copy of a
    >folder with many folders inside. Any somewhat wide and deep filesystem
    >will do. Good luck. You will have to use the backup utility for those.
    >
    >Any other OS will handle such a simple copy operation flawlessly.


    What has that got to do with the statements above?

    --
    Floyd L. Davidson
    Ukpeagvik (Barrow, Alaska) floyd@apaflo.com

  19. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    On Dec 4, 12:19 am, fl...@apaflo.com (Floyd L. Davidson) wrote:
    > Ramon F Herrera wrote:
    >
    > >There are many features in which Windows does not differ from other
    > >OSs... in theory, that is.

    >
    > What has that got to do with the statements above?
    >
    > >Let's use recursive copy as an example. Try to do a simple copy of a
    > >folder with many folders inside. Any somewhat wide and deep filesystem
    > >will do. Good luck. You will have to use the backup utility for those.

    >
    > >Any other OS will handle such a simple copy operation flawlessly.

    >
    > What has that got to do with the statements above?
    >



    They are both instances of a class of syntactically correct English
    sentences that compare operating system features with their actual
    implementations.

    I mean, I could write a chapter on the many relationships between what
    you wrote and what I did.

    Some people define intelligence as the ability to find relationships,
    btw.

    -RFH


  20. Re: How come Linux/Unix filesystems don't seem to get fragmented?

    Ramon F Herrera wrote:
    >
    >Some people define intelligence as the ability to find relationships,
    >btw.


    So the inability to relate your posts to the topic of
    discussion, the article you reply to, or anything else
    with a relationship to the newsgroups you posted in, is
    only because you lack intelligence. That's good, cause
    I was about to think there were other things wrong with
    you.


    --
    Floyd L. Davidson
    Ukpeagvik (Barrow, Alaska) floyd@apaflo.com
