.DIR performance with many versions vs. many unique names ? - VMS



Thread: .DIR performance with many versions vs. many unique names ?

  1. .DIR performance with many versions vs. many unique names ?

    Hi.
    Just a quick question...

    Would there be any difference in performance
    (lookup, delete and so on) on a directory
    when using unique names vs. having fewer names
    but multiple versions ? That is, having the same
    total number of files in both cases.

    The actual case is where we want to save batch logs
    for some time, and I would like to give them unique
    names (adding a timestamp) instead of just having a
    lot of old "versions". The issue is of course having
    versions rolling up to the max version...

    Maybe having multiple versions of the same file is
    stored in less space in the .DIR file than having
    unique names !?

    Jan-Erik.

  2. Re: .DIR performance with many versions vs. many unique names ?

    On Nov 6, 4:42 pm, Jan-Erik Söderholm
    wrote:

    > Maybe having multiple versions of the same file is
    > stored in less space in the .DIR file than having
    > unique names !?


    Correct. Each additional version takes just 8 bytes (until a
    block is full).
    Each new directory entry takes 4 + filename + 8 bytes.
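    The arithmetic above can be sketched quickly (in Python rather
    than DCL, so it can be run anywhere). It uses only the per-entry
    sizes quoted in this thread; the even-byte filename padding is an
    assumption about the on-disk layout, so treat the numbers as rough
    estimates, not the exact .DIR record format.

    ```python
    # Rough estimate of .DIR record space, based on the sizes quoted
    # above: a new entry costs 4 bytes of header + the filename
    # (assumed padded to an even length) + 8 bytes per version; each
    # extra version of an existing name costs just 8 more bytes.

    def entry_bytes(name, versions):
        padded = len(name) + (len(name) % 2)  # assumed word alignment
        return 4 + padded + 8 * versions

    def unique_names(n, name_len=20):
        # n files, each with a unique (e.g. timestamped) name
        return n * entry_bytes("X" * name_len, 1)

    def multi_version(n, name_len=12):
        # one name carrying all n versions
        return entry_bytes("X" * name_len, n)

    print(unique_names(1000))   # 1000 unique 20-char names -> 32000
    print(multi_version(1000))  # one name, 1000 versions   -> 8016
    ```

    So for the same 1000 files, the multi-version scheme needs only a
    quarter of the directory space in this example, which matches the
    point being made here.
    
    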

    Just create a directory with a few examples and DUMP/DIR xxx.DIR

    Also... if you can, add a 'proper' date stamp in always-ascending
    order (YYYYMMDD, or even YYMMDD in this case).
    That way it will be easier to add room if needed.
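    The reason ascending date stamps help is that zero-padded
    YYYYMMDD strings sort lexically in the same order as
    chronologically, so each new file lands at the end of the
    directory. A minimal Python sketch of that property (the
    BATCH_*.LOG name pattern is illustrative, not from the thread):

    ```python
    from datetime import date, timedelta

    def log_name(d):
        # Hypothetical log-file name with a zero-padded YYYYMMDD stamp.
        return f"BATCH_{d:%Y%m%d}.LOG"

    days = [date(2007, 11, 1) + timedelta(days=i) for i in range(5)]
    names = [log_name(d) for d in days]

    # Alphabetical order equals creation order, so new entries always
    # append at the end of the (alphabetically kept) directory.
    assert names == sorted(names)
    ```
    
    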

    Hein.



  3. Re: .DIR performance with many versions vs. many unique names ?

    Hein RMS van den Heuvel wrote:
    > On Nov 6, 4:42 pm, Jan-Erik Söderholm
    > wrote:
    >
    >> Maybe having multiple versions of the same file is
    >> stored in less space in the .DIR file than having
    >> unique names !?

    >
    > Correct. Each additional version takes just 8 bytes (until a
    > block is full).
    > Each new directory entry takes 4 + filename + 8 bytes.
    >


    OK, so with a specific number of files, unique names will
    create a larger DIR file.

    > Just create a directory with a few examples and DUMP/DIR xxx.DIR
    >
    > Also... if you can, add a 'proper' date stamp in always-ascending
    > order (YYYYMMDD, or even YYMMDD in this case).


    Right, that's a bonus. The last files created are those displayed
    last on screen after a DIR... :-)

    > That way it will be easier to add room if needed.


    OK, because it's faster to create new files at the end
    of the DIR, right ? And what about deletes ? The plan
    is to run a delete/before= regularly, but the files
    deleted will be at the start/top of the directory. That
    might cause some extra I/Os, maybe (?)

    The fact is that I have another system where all files are
    created with timestamps, and I have a hard time getting the
    "Dir Data (Hit %)" in MONI FILE above even 5% on
    that system (DS20, 8.2, standard blue StorageWorks 9 GB
    disks). I've tried setting ACP_DIRCACHE to 10,000 or
    20,000 blocks, but it doesn't help.
    So, actually, I am also looking for a way of getting
    this other system running with higher Dir Data hit
    rates.

    On the system I'm looking at now, we have either 100%
    Dir Data hits (or zero when there are no attempts). I
    do not want this system to go the same way with its
    I/O. B.t.w., this is also a DS20, but using HSG80's
    for the disk system.

    Could the DIR data caching have anything to do with
    the number of *unique* file names (and not only the
    *total* number of files) ?

    Jan-Erik.

