Linux file system overhead - Ubuntu


Thread: Linux file system overhead

  1. Linux file system overhead

    I'm fairly new to Linux and am using Ubuntu 8.04 to create several servers
    to hold my video and audio collection. In one of the servers I put four
    1-terabyte hard drives. I used the default of ext3 when installing and now
    have a system capacity of 1376.3 GB available. Windows would yield 931 GB
    per drive, or 3724 GB total. Seems that I'm losing quite a bit of space.
    Any ideas?


  2. Re: Linux file system overhead

    After takin' a swig o' grog, Jim belched out
    this bit o' wisdom:

    > I'm fairly new to Linux and am using Ubuntu 8.04 to create several servers
    > to hold my video and audio collection. In one of the servers I put 4 1
    > terabyte hard drives. I used the default of ext3 when installing and now
    > have a system capacity of 1376.3 Gb available. Windows would yield 931 Gb
    > per drive or 3724 Gb. Seems that I'm losing quite a bit of space. Any
    > ideas?


    Start with "man tune2fs".

    You can also reformat with new settings.

    Also, if you have a lot of files, check out other file-systems such as
    reiserfs or xfs.

    (Disclaimer: it is highly probable you'll get a better answer later from
    someone else.)

    --
    This font is starting to come out very nicely
    Knghtbrd: oh dear, are you hacking up another quake font in vi?

  3. Re: Linux file system overhead

    On Fri, 31 Oct 2008 08:53:51 -0700, Jim wrote:

    > I'm fairly new to Linux and am using Ubuntu 8.04 to create several


    Server or Desktop edition?

    > servers to hold my video and audio collection. In one of the servers I
    > put 4 1 terabyte hard drives. I used the default of ext3 when
    > installing and now have a system capacity of 1376.3 Gb available.


    Does the exact same machine with Windows see all four hard drives? If it
    does, I have no explanation for the bulk of the missing space. That said,
    by default ext3 reserves 5% of the volume for the superuser. Also, ext3
    does not add and remove inodes automatically.
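
    If you want to check or shrink that reservation, here's a rough sketch
    (the device name is just a placeholder):

    $ sudo tune2fs -l /dev/sdb1 | grep -i 'reserved block count'
    $ sudo tune2fs -m 1 /dev/sdb1    # cut the root reservation from 5% to 1%

    Note that even the full 5% of 4 TB is only about 200 GB, which doesn't
    come close to explaining the gap you're seeing.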

    > Windows would yield 931 Gb per drive or 3724 Gb. Seems that I'm losing
    > quite a bit of space. Any ideas?


    You'd probably be better off to use a different file system with a server.
    XFS and JFS are possibilities. Google, and even Wikipedia, are your
    friends.

    http://en.wikipedia.org/wiki/JFS_(file_system)

    http://en.wikipedia.org/wiki/XFS

    http://en.wikipedia.org/wiki/Comparison_of_file_systems

    http://www.debian-administration.org/articles/388

    --
    Tony Sivori
    Due to spam, I'm now filtering all Google Groups posters.

  4. Re: Linux file system overhead

    Jim wrote:
    > I'm fairly new to Linux and am using Ubuntu 8.04 to create several
    > servers to hold my video and audio collection. In one of the servers I
    > put 4 1 terabyte hard drives. I used the default of ext3 when
    > installing and now have a system capacity of 1376.3 Gb available.
    > Windows would yield 931 Gb per drive or 3724 Gb. Seems that I'm losing
    > quite a bit of space. Any ideas?


    You'll probably have to look at the used/free space per drive to figure
    out what's wrong. Because wrong it is.

  5. Re: Linux file system overhead

    On Fri, 31 Oct 2008 08:53:51 -0700, Jim wrote:

    > I'm fairly new to Linux and am using Ubuntu 8.04 to create several
    > servers to hold my video and audio collection. In one of the servers I
    > put 4 1 terabyte hard drives. I used the default of ext3 when
    > installing and now have a system capacity of 1376.3 Gb available.
    > Windows would yield 931 Gb per drive or 3724 Gb. Seems that I'm losing
    > quite a bit of space. Any ideas?


    What does 'df' show when you've mounted them?

  6. Re: Linux file system overhead

    On October 31, 2008 11:53, in alt.os.linux.ubuntu, Jim
    (handyjh@frontiernet.net) wrote:

    > I'm fairly new to Linux and am using Ubuntu 8.04 to create several servers
    > to hold my video and audio collection. In one of the servers I put 4 1
    > terabyte hard drives. I used the default of ext3 when installing and now
    > have a system capacity of 1376.3 Gb available. Windows would yield 931 Gb
    > per drive or 3724 Gb. Seems that I'm losing quite a bit of space. Any
    > ideas?


    As root, run dumpe2fs(8) ("man 8 dumpe2fs") and take a look at how you've
    set up your ext3 filesystems.

    Some things to note:
    - the ext3 filesystem journal takes up space on the filesystem.
    - some of the filesystem space is reserved for the superuser (usually
    to permit root to log on and repair a "full" filesystem)
    - the blocksize should be tuned to the average size of the files on the
    fs. Too large a blocksize and you waste space. Too small, and you run
    out of inodes.

    Both the size of the journal and the amount of space reserved for the
    superuser can be altered using tune2fs(8) ("man 8 tune2fs").

    The blocksize can be altered, but only by reformatting the filesystem. If
    you need to do that, back up the filesystem contents, mke2fs with the new
    blocksize, and then restore the filesystem contents from the backup.
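
    For instance, a rough outline of that procedure, assuming (purely for
    illustration) that the filesystem lives on /dev/sdb1 and is normally
    mounted at /mnt/data:

    $ sudo tar -C /mnt/data -cpf /backup/data.tar .
    $ sudo umount /mnt/data
    $ sudo mke2fs -j -b 4096 /dev/sdb1
    $ sudo mount /dev/sdb1 /mnt/data
    $ sudo tar -C /mnt/data -xpf /backup/data.tar

    (-j adds the journal that makes it ext3; -b sets the new blocksize.)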

    HTH
    --
    Lew Pitcher

    Master Codewright & JOAT-in-training | Registered Linux User #112576
    http://pitcher.digitalfreehold.ca/ | GPG public key available by request
    ---------- Slackware - Because I know what I'm doing. ------



  7. Re: Linux file system overhead

    On Fri, 31 Oct 2008 08:53:51 -0700
    "Jim" wrote:

    > I'm fairly new to Linux and am using Ubuntu 8.04 to create several
    > servers to hold my video and audio collection. In one of the servers
    > I put 4 1 terabyte hard drives. I used the default of ext3 when
    > installing and now have a system capacity of 1376.3 Gb available.
    > Windows would yield 931 Gb per drive or 3724 Gb. Seems that I'm
    > losing quite a bit of space. Any ideas?


    Please post the output of "df -h" and "sudo fdisk -l".

    --- Mike

    --
    My sigfile ran away and is on hiatus.
    http://www.trausch.us/


  8. Re: Linux file system overhead

    On Fri, 31 Oct 2008 12:56:20 -0400
    Tony Sivori wrote:

    > You'd probably be better off to use a different file system with a
    > server. XFS and JFS are possibilities. Google, and even Wikkipedia
    > are your friends.


    I've lost data to bugs in the XFS filesystem code resulting in kernel
    panics, so I have a strong distrust for that particular filesystem. I
    use ext3 for all my servers presently, and once ext4 has had time to
    see widespread usage and testing, I will probably move to that.

    I've never used JFS, but why use it over ext3? A better question might
    be "why not use ext3 on a server?"

    --- Mike

    --
    My sigfile ran away and is on hiatus.
    http://www.trausch.us/


  9. Re: Linux file system overhead

    On Fri, 31 Oct 2008 16:24:39 -0400, Michael B. Trausch wrote:

    > On Fri, 31 Oct 2008 12:56:20 -0400
    > Tony Sivori wrote:
    >
    >> You'd probably be better off to use a different file system with a
    >> server. XFS and JFS are possibilities. Google, and even Wikkipedia are
    >> your friends.

    >
    > I've lost data to bugs in the XFS filesystem code resulting in kernel
    > panics, so I have a strong distrust for that particular filesystem. I
    > use ext3 for all my servers presently, and once ext4 has had time to see
    > widespread usage and testing, I will probably move to that.
    >
    > I've never used JFS, but why use it over ext3? A better question might
    > be "why not use ext3 on a server?"


    I'm no expert. But given that ext3 wasn't performing well for the OP, it
    seemed to me that after 15 minutes with Google, JFS and XFS were the best
    alternate choices. I left out ReiserFS since it is not likely to be
    further developed.

    Assuming the info in the links I posted is correct, comparing ext3, JFS
    and XFS: Ext3 results in the smallest post format disk capacity. Ext3 has
    the smallest maximum file size limit. Ext3 has the smallest maximum volume
    limit. Ext3 does not significantly outperform JFS or XFS in read or write
    speeds, although performance does vary depending on the size of the file
    in question.

    --
    Tony Sivori
    Due to spam, I'm now filtering all Google Groups posters.

  10. Re: Linux file system overhead

    On 2008-10-31, Jim wrote:
    > I'm fairly new to Linux and am using Ubuntu 8.04 to create several servers
    > to hold my video and audio collection. In one of the servers I put 4 1
    > terabyte hard drives. I used the default of ext3 when installing and now
    > have a system capacity of 1376.3 Gb available. Windows would yield 931 Gb
    > per drive or 3724 Gb. Seems that I'm losing quite a bit of space. Any
    > ideas?
    >


    You must have done something wrong; I have a server with a 5 TB ext3
    partition (LVM over RAID 6).

    --
    Due to extreme spam originating from Google Groups, and their inattention
    to spammers, I and many others block all articles originating
    from Google Groups. If you want your postings to be seen by
    more readers you will need to find a different means of
    posting on Usenet.
    http://improve-usenet.org/

  11. Re: Linux file system overhead

    Tony Sivori wrote:

    > I'm no expert. But given that ext3 wasn't performing well for the OP, it
    > seemed to me that after 15 minutes with Google, JFS and XFS were the best
    > alternate choices. I left out ReiserFS since it is not likely to be
    > further developed.



    Well, that certainly is an excellent reason, eh?

    It is already "developed," works fine, and is probably better than all
    of the others you mentioned.

    You might as well have stated that ReiserFS is no good because the person
    it is named for is in prison for murder.


    --
    John

    No Microsoft, Apple, AT&T, Intel, Novell, Trend Micro, nor Ford products were used in the preparation or transmission of this message.

    The EULA sounds like it was written by a team of lawyers who want to tell me what I can't do. The GPL sounds like it was written by a human being, who wants me to know what I can do.

  12. Re: Linux file system overhead

    On Fri, 31 Oct 2008 23:52:28 -0400, Tony Sivori wrote:

    > I'm no expert. But given that ext3 wasn't performing well for the OP, it
    > seemed to me that after 15 minutes with Google, JFS and XFS were the
    > best alternate choices. I left out ReiserFS since it is not likely to be
    > further developed.


    I hear the original author of ReiserFS has plenty of free time to develop
    and maintain it. The problem is he only sends out new versions of the
    source code via snail mail.

    stonerfish

  13. Re: Linux file system overhead


    To the OP of this thread: Read the _very_last_ paragraph for what you
    need to do to provide us information to help you; read the rest if
    you're interested in all the gory details.

    On Fri, 31 Oct 2008 23:52:28 -0400
    Tony Sivori wrote:

    > On Fri, 31 Oct 2008 16:24:39 -0400, Michael B. Trausch wrote:
    >
    > >
    > > I've lost data to bugs in the XFS filesystem code resulting in
    > > kernel panics, so I have a strong distrust for that particular
    > > filesystem. I use ext3 for all my servers presently, and once ext4
    > > has had time to see widespread usage and testing, I will probably
    > > move to that.
    > >
    > > I've never used JFS, but why use it over ext3? A better question
    > > might be "why not use ext3 on a server?"

    >
    > I'm no expert. But given that ext3 wasn't performing well for the OP,
    > it seemed to me that after 15 minutes with Google, JFS and XFS were
    > the best alternate choices. I left out ReiserFS since it is not
    > likely to be further developed.
    >


    Indeed on that last bit.

    >
    > Assuming the info in the links I posted is correct, comparing ext3,
    > JFS and XFS: Ext3 results in the smallest post format disk capacity.
    > Ext3 has the smallest maximum file size limit. Ext3 has the smallest
    > maximum volume limit. Ext3 does not significantly outperform JFS or
    > XFS in read or write speeds, although performance does vary depending
    > on the size of the file in question.
    >


    A few notes: I use size qualifiers exactly; KB/MB/GB/TB = metric
    measures (power-of-ten), KiB/MiB/GiB/TiB = CS measures (power-of-two).
    Read carefully.[3] Also, sorry about the length of this post and the
    grossly technical nature of it; if technical details bore ye, feel free
    to move along. ;-)

    Alright, then. Being that the max volume size for ext3 is 16 TiB (on
    platforms where Linux uses a 4 KiB page size)[2] and the max file size
    is 2 TiB, it's useful for all but the largest types
    of data storage configurations. The OP was mentioning that they were
    trying to use 4 TB, which is well within that range. I am assuming
    that they're using these 4 TB as a single logical partition spread
    across the drives.

    In order to see what the filesystem overhead is for a 4 TB drive, one
    can use the qemu-img utility to create a sparse file that has a size
    listed of 4,000,000,000,000 bytes and initially uses 0 bytes on-disk
    (aside from its metadata, which is irrelevant to this example). (Note
    that this will only work on an ext4 filesystem if you are using the ext
    family; I am running on ext4, so I'll show by example.) I am using TB
    in its literal metric size, as hard drives are not sold by power-of-two
    measurements and have not been for a very long time. Size isn't
    exact, so the numbers here will be "in the ballpark".

    Also, note that "du -B 1 FILE" will show the size on-disk of a
    sparse file such as the one being used here; you can see the
    *actual* filesystem overhead this way.

    If you want to retrace my steps, verify my math and observations, and
    so forth you need a filesystem that supports file sizes > 2 TiB. While
    the file will _actually_ never consume more than 31.7 billion bytes,
    the filesystem still has to be able to record a ~3.7 TiB/4 TB file
    size. ext4 can do this; ext3 cannot, alas. So, run 2.6.28 or use
    ext4dev in Intrepid on a partition that is 40 GB or so to create this
    file. To see the file's actual storage on-disk, use "du -B 1
    test-4tb.img" at the shell.

    So, to find the default parameters, we run:

    $ qemu-img create -f raw test-4tb.img 3906250000
    (this creates the sparse file, 4 TB in size)
    $ mkfs.ext3 -I 128 ./test-4tb.img

    This shows us:

    mke2fs 1.41.3 (12-Oct-2008)
    test-4tb.img is not a block special device.
    Proceed anyway? (y,n) y
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    244146176 inodes, 976562500 blocks
    48828125 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4294967296
    29803 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
    2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
    78675968, 102400000, 214990848, 512000000, 550731776, 644972544

    (And the format will take a long time.) Now, let's run some
    calculations. First, we know that an inode has a default size of 256
    bytes on modern distributions. The OP is using Ubuntu Hardy, which has
    e2fsprogs 1.40.8, with a modification by Ubuntu to make the default
    inode size 128 bytes. So, unless the OP tells mkfs.ext3 to use 256
    byte inodes, they will be 128 bytes in size, which is important to our
    calculations.
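
    To see which inode size an existing ext3 filesystem actually ended up
    with, a quick check (the device name is just a placeholder):

    $ sudo tune2fs -l /dev/sdb1 | grep -i 'inode size'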

    So, there are 8,192 inodes (128 bytes each) for each block group, and
    there are 29,803 block groups on disk. This comes out to
    31,250,710,528 bytes of space which will be taken up by the inodes
    (31.25 GB and change). This will support a maximum of 244,146,176
    files on the volume. Whether or not this is a reasonable number of
    files can only be determined by the OP, so I won't speculate there.
    FYI, an inode contains the type of the file, permission information,
    file size, and so forth. One file, one inode. So, on this 4 TB
    volume, we can have a little over 244.1 million files, for an average
    file size of 16,383.6 bytes per file. If you suspect that your
    average filesize is going to be larger, and want to trim the overhead
    of inode storage, you can decrease the number of inodes on your volume
    by specifying a usage type (-T option to mkfs.ext3), by specifying the
    bytes-per-inode (-i option to mkfs.ext3), or by specifying the number
    of inodes directly (-N option to mkfs.ext3). Not surprisingly, the
    default bytes-per-inode is 16,384 (you can find the default
    in /etc/mke2fs.conf and adjust it for your system if need be).
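
    If you want to sanity-check that inode-overhead arithmetic at a shell
    prompt:

    $ echo $(( 29803 * 8192 * 128 ))
    31250710528

    That is 29,803 block groups x 8,192 inodes per group x 128 bytes per
    inode.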

    Now, the next thing to look at which will deduct space from our
    filesystem is the space reserved on-disk for the root user (or whatever
    user/group is specified in the filesystem superblock). As can be seen
    above, 48,828,125 blocks are reserved by default; this is 5% of the
    disk space. Since the block size is 4,096 bytes, this means that the
    amount of reserved space on this filesystem is going to be
    200,000,000,000 bytes. This is 200 GB of space, which is *quite* a bit
    excessive; we probably only need 100 MB or so to be reserved for
    when the disk is full, unless this filesystem is going to be holding
    system logs which are expected to grow at an alarming rate. I would
    suggest changing the reserved space to take up 25,600 blocks (say
    "tune2fs -r 25600 /dev/DEVICE" after creating the filesystem; mkfs.ext3
    doesn't let you specify the reserved space in blocks yet). You can
    disable it *completely* by specifying "-m 0" to mkfs.ext3, or "-r 0" to
    tune2fs on an existing filesystem if you judge that you do not need it
    at all.
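
    The reserved-space figure checks out the same way:

    $ echo $(( 48828125 * 4096 ))
    200000000000

    48,828,125 reserved blocks of 4,096 bytes each is exactly 200 GB.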

    There is also a bit reserved for the "lost+found" directory that sits
    in the root directory, 32 blocks worth (or 131,072 bytes of disk
    space). This is used to create directory entries in lost+found and
    exists as a protection measure against an out-of-space condition when
    executing fsck on the partition; essentially the lost+found directory
    has its directory entries pre-allocated for it to an extent.

    In the default configuration, then, we've determined that there is a
    total of 231.25 GB and change used/reserved at FS creation time. We can
    reduce that to 31.25 GB and change very simply by just changing the
    reserved space to statistically near zero (but large enough to still be
    useful). There can be a bit more "invisible" usage depending on the
    sizes of the files on the drive for indirect blocks, which are
    pointers to additional data blocks for the file when the file grows
    beyond a certain size, but the space they take is negligible.

    The end result is that an empty filesystem, set up with the default
    options, for 4 TB, is going to yield (4000GB - 231.25 GB), or 3768.75
    GB. This is actually more space than the NTFS setup that the OP was
    talking about.

    Remember that filesystem we created in a sparse file above? Once it is
    done formatting with the command we used above, it consumes
    31,657,242,624 bytes on disk. This is slightly larger
    (406,532,096 bytes larger, exactly) than the figure I indicated above
    because it includes the journal (an extra 128 MiB), 21 copies of the
    superblock and the original superblock (22 instances in total). The
    data within the superblock takes 256 bytes[1], though it is stored in a
    block, which means that it actually is in a 4,096 byte space. 22
    copies, then, ties up 90,112 bytes of disk space. The remainder is
    metadata for the block groups over and above the inodes (roughly 387.70
    MiB - 128.00 MiB - 0.09 MiB, or ~259.61 MiB of additional overhead).
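
    Checking those two figures:

    $ echo $(( 31657242624 - 31250710528 ))
    406532096
    $ echo $(( 22 * 4096 ))
    90112

    That's 406,532,096 bytes of overhead beyond the inode tables, of which
    90,112 bytes are the 22 superblock instances.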

    When mounted, it looks like:
    $ df /mnt
    Filesystem     1K-blocks    Used   Available  Use%  Mounted on
    /dev/loop0    3875472712  200836  3679959376    1%  /mnt

    $ df -h /mnt
    Filesystem    Size  Used  Avail  Use%  Mounted on
    /dev/loop0    3.7T  197M   3.5T    1%  /mnt

    .... and after adjusting the reserved space to 25,600 blocks:
    $ sudo umount /mnt

    $ tune2fs -r 25600 ./test-4tb.img
    tune2fs 1.41.3 (12-Oct-2008)
    Setting reserved blocks count to 25600

    $ sudo mount ./test-4tb.img -o ro,loop /mnt
    $ df /mnt
    Filesystem     1K-blocks    Used   Available  Use%  Mounted on
    /dev/loop0    3875472712  200836  3875169476    1%  /mnt

    $ df -h /mnt
    Filesystem    Size  Used  Avail  Use%  Mounted on
    /dev/loop0    3.7T  197M   3.7T    1%  /mnt

    (For some reason, it did not update avail on the loop device; my /home
    directory sees changes immediately when I issue them when it is
    mounted, though; I am going to guess that loop devices don't see the
    changes right away or maybe the fact that I mounted the filesystem
    readonly means that it wasn't looking for the changes to be made.)

    The OP says that after formatting, though, only 1.3 TB is available out
    of a 4 TB file system? I can't calculate a way wherein there is 2.7 TB
    of overhead in the on-disk format, and the default format should yield
    about 3.5 TiB available (3,679,959,376 KiB, roughly 3.77 TB, give or
    take a few thousands/millions of bytes for minor differences in the
    partition/hard disk size)...

    So as I posted earlier in response to the OP, the output of "df -h" and
    "sudo fdisk -l" from the OP's system is necessary so that we can see
    some more details. Also, if possible, use dumpe2fs which will dump
    statistics about the filesystem to a text file which will be
    approximately 8 megs for a ~4 TB filesystem. Compress that file with
    gzip or bzip2, please, and upload it somewhere (and post a link) so
    that we can see what the overall picture of the filesystem is.
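
    Something along these lines should do it, with the device name changed
    to match your actual setup (the name below is just a placeholder):

    $ sudo dumpe2fs /dev/sdb1 > dumpe2fs-output.txt
    $ gzip dumpe2fs-output.txt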

    --- Mike

    [1] ext3_fs.h, around line 423: http://tinyurl.com/5m43ha
    [2] http://lwn.net/Articles/187321/ (note, the FS does nowadays use an
    unsigned integer which is why it can do 16 TiB these days,
    ext3_fs.h says the __u32 type is used for blocks now)
    [3] http://tinyurl.com/6njv2c

    --
    My sigfile ran away and is on hiatus.
    http://www.trausch.us/


  14. Re: Linux file system overhead

    On Fri, 31 Oct 2008 23:29:36 -0500
    "John F. Morse" wrote:

    > Tony Sivori wrote:
    >
    > > I'm no expert. But given that ext3 wasn't performing well for the
    > > OP, it seemed to me that after 15 minutes with Google, JFS and XFS
    > > were the best alternate choices. I left out ReiserFS since it is
    > > not likely to be further developed.

    >
    > Well, that certainly is an excellent reason, eh?


    See the LKML for reference, but there have always been issues
    here-and-there re: ReiserFS 3. Hans, as the leader of Namesys, proved
    to be a very cruddy maintainer of the ReiserFS 3 filesystem, often
    failing to respond to bug reports and other quality issues in the
    code. This was one of the main reasons that ReiserFS 4 was never
    included in the mainline kernel.

    To be completely fair, if Hans had played ball with the other
    kernel developers and not gotten angry or felt attacked when bugs were
    filed and issues were brought up, and had been more responsive in
    maintaining the filesystem, we'd probably have ReiserFS 4 as a quality
    filesystem. But, ReiserFS in general sees less exposure to real life
    stressful situations, and so there is implicitly less trust to be given
    to it. In _this_ case, "not likely to be further developed" is a very
    legitimate concern.

    > It is already "developed," works fine, and is probably better than
    > all of the others you mentioned.


    I've only had a few issues using ReiserFS 3 in the past. I tested
    ReiserFS 4 for a little while and experienced some data loss with it
    (admittedly, I am _hard_ on filesystems). But I wouldn't argue that
    either are better than ext3, which I have personally encountered no
    issues with. ReiserFS 3 is fine for personal, light usage if you back
    up your data regularly. I don't trust ReiserFS 4 at all, but also to
    be fair, it's been _quite_ some time (about 2 or 3 years) since I've
    bothered to patch a kernel to try to use it.

    > You might as well have stated the reiserfs is no good because the
    > person it is named for is in prison for murder.


    That would only be a valid argument if nobody stepped up to maintain
    ReiserFS in any capacity. Looking at the LKML, it doesn't appear as if
    it _is_ actively maintained, even the in-kernel version. I'd expect
    support for it in the mainline kernel to be dropped in the
    near-to-mid-distant future if that situation doesn't change.

    --- Mike

    --
    My sigfile ran away and is on hiatus.
    http://www.trausch.us/


  15. Re: Linux file system overhead

    Tony Sivori wrote:
    >

    .... snip ...
    >
    > --
    > Tony Sivori
    > Due to spam, I'm now filtering all Google Groups posters.


    Note that, for unknown reasons, Google doesn't accept posts to
    alt.os.linux.ubuntu.

    --
    [mail]: Chuck F (cbfalconer at maineline dot net)
    [page]:
    Try the download section.

  16. Re: Linux file system overhead

    jellybean stonerfish wrote:
    > Tony Sivori wrote:
    >
    >> I'm no expert. But given that ext3 wasn't performing well for
    >> the OP, it seemed to me that after 15 minutes with Google, JFS
    >> and XFS were the best alternate choices. I left out ReiserFS
    >> since it is not likely to be further developed.

    >
    > I hear the original author of ReiserFS has plenty of free time
    > to develop and maintain it. The problem is he only sends out
    > new versions of the source code via snail mail.


    I understand he is in jail for murder. This may be apocryphal.

    --
    [mail]: Chuck F (cbfalconer at maineline dot net)
    [page]:
    Try the download section.

  17. Re: Linux file system overhead

    On Sat, 01 Nov 2008 20:26:08 -0500, CBFalconer wrote:
    >
    > I understand he is in jail for murder. This may be apocryphal.


    http://www.sfgate.com/cgi-bin/articl...BAIQ12KT15.DTL

    --
    Tony Sivori
    Due to spam, I'm now filtering all Google Groups posters.

  18. Re: Linux file system overhead

    On Fri, 31 Oct 2008 23:29:36 -0500, John F. Morse wrote:

    > Tony Sivori wrote:
    >
    >> I'm no expert. But given that ext3 wasn't performing well for the OP,
    >> it seemed to me that after 15 minutes with Google, JFS and XFS were the
    >> best alternate choices. I left out ReiserFS since it is not likely to
    >> be further developed.

    >
    >
    > Well, that certainly is an excellent reason, eh?


    If you say so. I only considered it to be a fair reason.

    > It is already "developed," works fine, and is probably better than all
    > of the others you mentioned.


    I disagree that ReiserFS is superior to JFS and XFS.

    That said, I used ReiserFS for years after ext3 let me down after an
    abrupt power off. No complaints, except for the relatively slow mount
    time. Since my computer has 16 partitions, and most were mounted at boot
    time, ReiserFS did result in noticeably longer boot times.

    > You might as well have stated the reiserfs is no good because the person
    > it is named for is in prison for murder.


    One might reasonably shy away from ReiserFS for that very reason. It
    certainly is enough to give me mixed feelings about it. I certainly
    wouldn't use it if Hans were to personally profit from it.

    However, my concerns were more practical. No more bug fixes, no more
    patches, no telling when the next kernel update might break it, no more
    development. That's reason enough, in my view, to not install ReiserFS.

    --
    Tony Sivori
    Due to spam, I'm now filtering all Google Groups posters.

  19. Re: Linux file system overhead

    On Sat, 01 Nov 2008 04:38:36 -0400, Michael B. Trausch wrote:

    > A few notes: I use size qualifiers exactly; KB/MB/GB/TB = metric
    > measures (power-of-ten), KiB/MiB/GiB/TiB = CS measures (power-of-two).
    > Read carefully.[3] Also, sorry about the length of this post and the
    > grossly technical nature of it; if technical details bore ye, feel free
    > to move along. ;-)


    It is starting to look academic since the OP apparently has either found a
    solution, given up, or is away from his usenet access.

    > If you want to retrace my steps, verify my math and observations,


    Oh, not really. I'm willing to take your word for it.

    > The OP says that after formatting, though, only 1.3 TB is available out
    > of a 4 TB file system?


    That is the part that I found puzzling too. Perhaps he was resizing
    partitions and left a lot of space unformatted.

    --
    Tony Sivori
    Due to spam, I'm now filtering all Google Groups posters.

  20. Re: Linux file system overhead

    Tony Sivori wrote:
    > On Fri, 31 Oct 2008 23:29:36 -0500, John F. Morse wrote:
    >
    >
    >> Tony Sivori wrote:
    >>
    >>
    >>> I'm no expert. But given that ext3 wasn't performing well for the OP,
    >>> it seemed to me that after 15 minutes with Google, JFS and XFS were the
    >>> best alternate choices. I left out ReiserFS since it is not likely to
    >>> be further developed.
    >>>

    >> Well, that certainly is an excellent reason, eh?
    >>

    >
    > If you say so. I only considered it to be a fair reason.
    >
    >
    >> It is already "developed," works fine, and is probably better than all
    >> of the others you mentioned.
    >>

    >
    > I disagree that ReiserFS is superior to JFS and XFS.
    >
    > That said, I used ReiserFS for years after ext3 let me down after an
    > abrupt power off. No complaints, except for the relatively slow mount
    > time. Since my computer has 16 partitions, and most were mounted at boot
    > time, ReiserFS did result in noticeably longer boot times.



    I personally am not concerned about mount times (nor boot times).
    Servers here, with uptimes exceeding two years.

    How do you feel about the ReiserFS ability to handle many small files?

    And ReiserFS's superior speed?

    I do feel comfortable saying that ReiserFS, JFS and XFS are superior to
    ext3 (another Novell blunder, eh).


    --
    John

    No Microsoft, Apple, AT&T, Intel, Novell, Trend Micro, nor Ford products were used in the preparation or transmission of this message.

    The EULA sounds like it was written by a team of lawyers who want to tell me what I can't do. The GPL sounds like it was written by a human being, who wants me to know what I can do.
