Most Efficient Directory Structure? - Setup


Thread: Most Efficient Directory Structure?

  1. Most Efficient Directory Structure?

    So, my new web server will be set-up soon and I'm going to be running a
    suite of applications that will let people upload files. The files
    will have a description stored in a database and be given a random
    filename equal to their unique ID in the database before they're
    stored.

    With the filename being a random sequence of letters and numbers, the
    maximum number of files I can store is limited to 36 ^ filename length.
    At just 8 chars, it's over 2 trillion. I'm not worried there.

    So I'm trying to figure out how I set up the directory structure for
    maximum expandability and maximum efficiency. If my new site is
    unsuccessful, it won't matter. But if it is... I could eventually have
    millions of files and I'd hate to be recoding everything to use a new
    directory structure, then trying to shift all those files around when I
    already had tens or hundreds of thousands of files stored.

    What I'm trying to figure out is:

    A: How to design so it can easily be spread over multiple disks. Seems
    subdirectories based on the initial letters of the filenames would
    work. For example, /home/myusername/stuff/A, /home/myusername/stuff/B,
    and /home/myusername/stuff/1, could each be mounted to a different
    physical drive in an external array, giving me 36 * average drive size
    expandability. And because the filenames are randomly generated, the
    distribution should be fairly even.

    B: How to design for fastest reads and writes. What I'm wondering is
    where the tipping point is in terms of number of files or subdirs
    degrading write/read/seek speed? Am I better off putting 360,000 files
    in one directory, or subdividing that into 36 subdirectories of 10,000
    files each, or perhaps subdividing further into 1296 subdirectories (36
    subdirectories with 36 subdirectories within them) of 278 files each?
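
    For concreteness, here's a rough sketch (Python; the paths and names
    are just placeholders) of how an ID would map to a storage path under
    the simple first-letter scheme from point A:

        import os

        STUFF_ROOT = "/home/myusername/stuff"  # each first-char subdir could be its own mount

        def path_for(file_id):
            """Map a random ID like '1T6YB0PR' to its storage path."""
            return os.path.join(STUFF_ROOT, file_id[0], file_id + ".file")

        # path_for("1T6YB0PR") -> /home/myusername/stuff/1/1T6YB0PR.file

    Deeper layouts would just take more leading characters as directory
    levels.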

    Technically, with an 8-character filename, I have over two trillion
    filenames available and could have over two trillion subdirectories,
    each 11 levels deep (for example...
    /home/myusername/stuff/1/T/6/Y/B/0/P/R/1T6YB0PR.file). But since I'd
    have to do a series of mkdir commands each time I stored a file, that
    could slow performance. Plus deep in my long-term-storage there's a
    memory of a server jockey telling me that each subdirectory requires a
    separate disk read which slows performance and wears the drive out
    faster.

    So where's the sweet spot in keeping the directory structure as simple
    as possible while keeping the best speed on writes, reads, and seeks?

    Thanks for any input.

    - Greg


  2. Re: Most Efficient Directory Structure?

    Why not store files themselves in a database? Then you do not need to
    worry about too many files. You may worry about backup though.

    On Nov 7, 12:03 pm, "Doesn't Work At McDonalds" wrote:
    > So, my new web server will be set-up soon and I'm going to be running a
    > suite of applications that will let people upload files. The files
    > will have a description stored in a database and be given a random
    > filename equal to their unique ID in the database before they're
    > stored.
    >

    [snip]


  3. Re: Most Efficient Directory Structure?

    In article <1162901006.244376.74130@h54g2000cwb.googlegroups.com>,
    Doesn't Work At McDonalds wrote:
    >
    > So I'm trying to figure out how I set up the directory structure for
    > maximum expandability and maximum efficiency.


    What was that line about "a man cannot serve two masters"...

    > B: How to design for fastest reads and writes. What I'm wondering is
    > where the tipping point is in terms of number of files or subdirs
    > degrading write/read/seek speed? Am I better off putting 360,000 files
    > in one directory, or subdividing that into 36 subdirectories of 10,000
    > files each, or perhaps subdividing further into 1296 subdirectories (36
    > subdirectories with 36 subdirectories within them) of 278 files each?


    You may have to change filesystems as time goes on. What's the expected
    size on a file? Does the access date matter?

    > Technically, with an 8-character filename, I have over two trillion
    > filenames available and could have over two trillion subdirectories,
    > each 11 levels deep (for example...
    > /home/myusername/stuff/1/T/6/Y/B/0/P/R/1T6YB0PR.file). But since I'd
    > have to do a series of mkdir commands each time I stored a file, that
    > could slow performance.


    Might a "if not successful (readdir) then mkdir" be faster?
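
    Something along these lines (Python sketch; the helper name is made up):

        import os

        def ensure_dir(path):
            """Only call mkdir when the directory isn't already there;
            the existence check will usually be answered from cache."""
            if not os.path.isdir(path):
                os.mkdir(path)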

    > Plus deep in my long-term-storage there's a
    > memory of a server jockey telling me that each subdirectory requires a
    > separate disk read which slows performance and wears the drive out
    > faster.


    The directory will most likely be cached, so it may slow performance
    (not much, as it's in RAM), but won't hit the disk.

    > So where's the sweet spot in keeping the directory structure as simple
    > as possible while keeping the best speed on writes, reads, and seeks?


    You need to say what the expected mix of those are, before any decent
    recommendations can be offered.

    --
    -eben QebWenE01R@vTerYizUonI.nOetP royalty.no-ip.org:81

    And we never failed to fail / It was the easiest thing to do -- CSN


  4. Re: Most Efficient Directory Structure?

    H.Xu wrote:
    > Why not store files themselves in a database? Then you do not need to
    > worry about too many files. You may worry about backup though.


    First, it's an automatic performance hit. Retrieving the file from the
    database and serving it programmatically requires more CPU and disk
    activity than directly linking to the file. That means slower serving
    and more wear and tear on the server.

    Second, I've had a very large database and the larger it got, the more
    maintenance had to be done on it to keep things flowing smoothly.

    Third, IIRC, most filesystems have a maximum size for a single file and
    the database "data" is kept in one file per table. So, if that limit
    is 4 gigs, then I'd have to create a new table each time I started
    approaching the 4 gig limit. On a 500gb drive with 400 gigs allocated
    to files, that would require distributing the files across 100 tables.
    The mechanisms needed to manage that would create additional overhead
    to slow things down.


  5. Re: Most Efficient Directory Structure?

    Hactar wrote:
    > > So I'm trying to figure out how I set up the directory structure for
    > > maximum expandability and maximum efficiency.

    >
    > What was that line about "a man cannot serve two masters"...


    Heh. :-)

    > You may have to change filesystems as time goes on. What's the expected
    > size on a file? Does the access date matter?


    Ranges from about 8k to 800k. Access date is unimportant.

    > Might a "if not successful (readdir) then mkdir" be faster?


    Definite possibility.

    > > So where's the sweet spot in keeping the directory structure as simple
    > > as possible while keeping the best speed on writes, reads, and seeks?

    >
    > You need to say what the expected mix of those are, before any decent
    > recommendations can be offered.


    Write-occasionally, read and seek often.

    - Greg


  6. Re: Most Efficient Directory Structure?

    In article <1162913330.391245.306810@h48g2000cwc.googlegroups.com>,
    Doesn't Work At McDonalds wrote:
    > Hactar wrote:
    >
    > > You may have to change filesystems as time goes on. What's the expected
    > > size on a file? Does the access date matter?

    >
    > Ranges from about 8k to 800k. Access date is unimportant.


    OK, not sure of the layout or filesystem, but an easy gain is to mount
    it "-o noatime". If the files were generally small, I'd say to follow
    the recommendations for a news spool.
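
    For example, a hypothetical /etc/fstab entry (device, mount point and
    filesystem type are placeholders):

        /dev/sdb1  /home/myusername/stuff  ext3  defaults,noatime  0  2

    or, for an already-mounted filesystem:

        mount -o remount,noatime /home/myusername/stuff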

    > > > So where's the sweet spot in keeping the directory structure as simple
    > > > as possible while keeping the best speed on writes, reads, and seeks?

    > >
    > > You need to say what the expected mix of those are, before any decent
    > > recommendations can be offered.

    >
    > Write-occasionally, read and seek often.


    Then you'll see bigger gains by improving read/seek performance at the
    expense of write performance (to an extent).

    --
    "The Web brings people together because no matter what kind of a
    twisted sexual mutant you happen to be, you've got millions of pals
    out there. Type in 'Find people that have sex with goats that are on
    fire' and the computer will say, 'Specify type of goat.'" -- Rich Jeni

  7. Re: Most Efficient Directory Structure?

    On 7 Nov 2006, in the Usenet newsgroup comp.os.linux.setup, in article
    <1162901006.244376.74130@h54g2000cwb.googlegroups.com>,
    Doesn't Work At McDonalds wrote:

    >So, my new web server will be set-up soon and I'm going to be running a
    >suite of applications that will let people upload files.


    Hope you are filtering lest you become a WareZ server

    >With the filename being a random sequence of letters and numbers, the
    >maximum number of files I can store is limited to 36 ^ filename length.
    >At just 8 chars, it's over 2 trillion. I'm not worried there.


    Yeah, in theory your file system would run out of space long before that.

    >So I'm trying to figure out how I set up the directory structure for
    >maximum expandability and maximum efficiency. If my new site is
    >unsuccessful, it won't matter. But if it is... I could eventually have
    >millions of files and I'd hate to be recoding everything to use a new
    >directory structure, then trying to shift all those files around when I
    >already had tens or hundreds of thousands of files stored.


    Look at the way a news server is set up. Lots and Lots of directories.

    >A: How to design so it can easily be spread over multiple disks. Seems
    >subdirectories based on the initial letters of the filenames would
    >work.


    Yes, but there are other problems. You are posting from a search engine -
    did you try looking there first? For a rather interesting starting point,
    get to the 'advanced search' section of google groups, and look in the
    'comp.os.linux.*' hierarchy, for the _Subject:_

    9Gbyte SCSI disc as boot disc with 1106 cylinders
    All 16 messages in topic - view as tree
    From: Peter Loibl - view profile
    Date: Wed, Oct 22 1997 12:00 am
    Email: Peter Loibl
    Groups: comp.os.linux.hardware, comp.os.linux.development.system,
    comp.os.linux.help, comp.os.linux.setup

    It's a fairly long thread - read especially the stuff from Stephen Tweedie.
    That will suggest other subjects to investigate. But it is doable,
    because Google is already doing so using Linux.

    Old guy

  8. Re: Most Efficient Directory Structure?

    Moe Trin wrote:
    > >So, my new web server will be set-up soon and I'm going to be running a
    > >suite of applications that will let people upload files.

    >
    > Hope you are filtering lest you become a WareZ server


    No way to upload WareZ with the mechanism offered.

    > Look at the way a news server is set up. Lots and Lots of directories.


    Thanks. A pointer in a good direction.

    > For a rather interesting starting point,
    > get to the 'advanced search' section of google groups, and look in the
    > 'comp.os.linux.*' hierarchy, for the _Subject:_
    >
    > 9Gbyte SCSI disc as boot disc with 1106 cylinders

    [snip]
    > It's a fairly long thread - read especially the stuff from Stephen Tweedie.
    > That will suggest other subjects to investigate. But it is doable,
    > because Google is already doing so using Linux.


    Thanks, another good pointer.

    - Greg


  9. Re: Most Efficient Directory Structure?

    "Doesn't Work At McDonalds" said:
    >So, my new web server will be set-up soon and I'm going to be running a
    >suite of applications that will let people upload files. The files
    >will have a description stored in a database and be given a random
    >filename equal to their unique ID in the database before they're
    >stored.
    >
    >With the filename being a random sequence of letters and numbers, the
    >maximum number of files I can store is limited to 36 ^ filename length.
    > At just 8 chars, it's over 2 trillion. I'm not worried there.
    >
    >So I'm trying to figure out how I set up the directory structure for
    >maximum expandability and maximum efficiency. If my new site is
    >unsuccessful, it won't matter. But if it is... I could eventually have
    >millions of files and I'd hate to be recoding everything to use a new
    >directory structure, then trying to shift all those files around when I
    >already had tens or hundreds of thousands of files stored.
    >
    >What I'm trying to figure out is:
    >
    >A: How to design so it can easily be spread over multiple disks. Seems
    >subdirectories based on the initial letters of the filenames would
    >work. For example, /home/myusername/stuff/A, /home/myusername/stuff/B,
    >and /home/myusername/stuff/1, could each be mounted to a different
    >physical drive in an external array, giving me 36 * average drive size
    >expandability. And because the filenames are randomly generated, the
    >distribution should be fairly even.


    You could use that -- but you could also go far beyond the limits of a
    single disk by using a real disk subsystem (something that hides
    individual disks from the server and just serves capacity - possibly
    even fault-tolerant capacity). These things can also handle load
    balancing across disks. Or perhaps a combination of the two.

    >B: How to design for fastest reads and writes. What I'm wondering is
    >where the tipping point is in terms of number of files or subdirs
    >degrading write/read/seek speed?


    ... but later on you said that reads are the majority of accesses.

    >Am I better off putting 360,000 files in one directory, or subdividing
    >that into 36 subdirectories of 10,000 files each, or perhaps subdividing
    >further into 1296 subdirectories (36 subdirectories with 36 subdirectories
    >within them) of 278 files each?


    You might look at how the web caches, such as Squid, are doing this.

    >Technically, with an 8-character filename, I have over two trillion
    >filenames available and could have over two trillion subdirectories,
    >each 11 levels deep (for example...
    >/home/myusername/stuff/1/T/6/Y/B/0/P/R/1T6YB0PR.file). But since I'd
    >have to do a series of mkdir commands each time I stored a file, that
    >could slow performance.


    ... but writes were just a fraction of the load, right?

    Anyway, that example is by far overdoing the directory hashing - esp.
    the last two directory components: you'd have a directory with 36
    subdirectories, each with just a single file. What Squid does (from
    memory), is something like
    .../1T/6Y/B0/1T6YB0PR.file

    ... wham! Massive reduction in directory names - and still keeping
    sizes of single directories in sane limits (up to 36*36 elements).
    Also, your worry of directory creation speed largely goes away, as
    there'd be at most three levels to create.
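
    As a rough sketch of that kind of mapping (Python; the root path and
    file suffix are made up):

        import os

        ROOT = "/home/myusername/stuff"

        def hashed_path(file_id):
            """'1T6YB0PR' -> .../1T/6Y/B0/1T6YB0PR.file"""
            d = os.path.join(ROOT, file_id[0:2], file_id[2:4], file_id[4:6])
            os.makedirs(d, exist_ok=True)  # at most three levels, created only on first use
            return os.path.join(d, file_id + ".file")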

    One thing you could do: measure. Write a proggy to create a directory
    structure, and a proggy to randomly read it (run several in parallel),
    and time the execution.

    Then change the structure, and measure again, with similar load.
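
    For instance, the read-side timer could be as simple as this (Python;
    ROOT and the repetition count are placeholders):

        import os, random, time

        ROOT = "/home/myusername/stuff"  # whichever layout is being tested

        # Gather every stored file once, then time a burst of random reads.
        paths = [os.path.join(dirpath, name)
                 for dirpath, _, names in os.walk(ROOT)
                 for name in names]

        start = time.time()
        for _ in range(10000):
            with open(random.choice(paths), "rb") as f:
                f.read()
        print("10000 random reads in %.2f seconds" % (time.time() - start))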
    --
    Wolf a.k.a. Juha Laiho Espoo, Finland
    (GC 3.0) GIT d- s+: a C++ ULSH++++$ P++@ L+++ E- W+$@ N++ !K w !O !M V
    PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
    "...cancel my subscription to the resurrection!" (Jim Morrison)

  10. Re: Most Efficient Directory Structure?

    Doesn't Work At McDonalds wrote:
    > H.Xu wrote:


    >>Why not store files themselves in a database? Then you do not need to
    >>worry about too many files. You may worry about backup though.


    > First, it's an automatic performance hit. Retrieving the file from the
    > database and serving it programmatically requires more CPU and disk
    > activity than directly linking to the file. That means slower serving
    > and more wear and tear on the server.


    While I think I agree with that, it is not much of a hit. If that hit is
    significant, you should have more than one server.

    > Second, I've had a very large database and the larger it got, the more
    > maintenance had to be done on it to keep things flowing smoothly.


    Of course always a consideration and worse with more servers.

    > Third, IIRC, most filesystems have a maximum size for a single file and
    > the database "data" is kept in one file per table. So, if that limit
    > is 4 gigs, then I'd have to create a new table each time I started
    > approaching the 4 gig limit. On a 500gb drive with 400 gigs allocated
    > to files, that would require distributing the files across 100 tables.
    > The mechanisms needed to manage that would create additional overhead
    > to slow things down.


    Can one ask just what kinds of files you intend to serve with the huge numbers
    you are talking about? It sounds like DVDs. Are you doing worst case planning or
    are these credible requirements? It is always possible to come up with an even
    worse worst case.

    I would be surprised if you can come up with a "best" solution for files from 2
    bytes to 2GB. If you can, publish. If you have a usual file size and expect a
    few outliers, then set up two file systems, even if on different drives on a
    single server.

    --
    Test question: Israel's war on Lebanon lasted 34 days. From the beginning
    Israel said 33. In the middle of September 2006, Israel has the length down
    to 30 days. How long will it take to get down to six days?
    -- The Iron Webmaster, 3711
    nizkor http://www.giwersworld.org/nizkook/nizkook.phtml
    Iraqi democracy http://www.giwersworld.org/911/armless.phtml a3

  11. Re: Most Efficient Directory Structure?

    Doesn't Work At McDonalds wrote:

    > A: How to design so it can easily be spread over multiple disks. Seems
    > subdirectories based on the initial letters of the filenames would
    > work. For example, /home/myusername/stuff/A, /home/myusername/stuff/B,
    > and /home/myusername/stuff/1, could each be mounted to a different
    > physical drive in an external array, giving me 36 * average drive size
    > expandability. And because the filenames are randomly generated, the
    > distribution should be fairly even.


    Some variant of extendible hashing might do the trick.

    --
    It's turtles, all the way down.
