Stupidly large file created by dump(8) - BSD



Thread: Stupidly large file created by dump(8)

  1. Stupidly large file created by dump(8)

    I have a machine which gets a level 0 dump(8) run periodically. In
    between (for various reasons) it gets a level 9 dump to a differently
    named file each day. So, the level 0 plus the latest level 9 will be
    sufficient to restore.

    I have a strange situation...the level 9 dumps vary wildly in size.
    Here's a fragment of a typical set of level 9 dumps:

    -rw-r--r-- 1 root wheel 21760000 Sep 16 04:01 sunday.usr
    -rw-r--r-- 1 root wheel 21841920 Sep 10 04:01 monday.usr
    -rw-r--r-- 1 root wheel 21913600 Sep 11 04:01 tuesday.usr
    -rw-r--r-- 1 root wheel 2851287040 Sep 12 04:01 wednesday.usr
    -rw-r--r-- 1 root wheel 2851287040 Sep 13 04:01 thursday.usr
    -rw-r--r-- 1 root wheel 110172160 Sep 14 04:01 friday.usr

    I've put them in chronological order to make it clearer. I should
    emphasise that the machine was creating very few, small files over
    this time. Yet by Wednesday the dump size had suddenly jumped by a
    factor of more than 100.

    If I restore the Wednesday dump (say) I get a file system that has a few
    tens of megabytes - not over 2GB! So WHY is the dump file so big?
    --
    Bob Eager
    UNIX since v6..
    http://tinyurl.com/2xqr6h


  2. Re: Stupidly large file created by dump(8)

    In article <176uZD2KcidF-pn2-YNdILRH86yvc@rikki.tavi.co.uk>
    "Bob Eager" writes:
    >
    >If I restore the Wednesday dump (say) I get a file system that has a few
    >tens of megabytes - not over 2GB! So WHY is the dump file so big?


    Are the dump files regular files? This just makes me remember
    dusty old warnings about things not to do with sparse files.
    If du says that they take up as much space as ls says, then all I
    can say is that it is an impressive problem.


    --
    Drew Lawson http://www.furrfu.com/ drew@furrfu.com

    In Dr. Johnson's famous dictionary patriotism is defined as the
    last resort of the scoundrel. With all due respect to an enlightened
    but inferior lexicographer I beg to submit that it is the first.
    -- Ambrose Bierce

  3. Re: Stupidly large file created by dump(8)

    On Tue, 18 Sep 2007 14:25:07 UTC, drew@furrfu.com (Drew Lawson) wrote:

    > In article <176uZD2KcidF-pn2-YNdILRH86yvc@rikki.tavi.co.uk>
    > "Bob Eager" writes:
    > >
    > >If I restore the Wednesday dump (say) I get a file system that has a few
    > >tens of megabytes - not over 2GB! So WHY is the dump file so big?

    >
    > Are the dump files regular files? This just makes me remember
    > dusty old warnings about things not to do with sparse files.
    > If du says that they take up as much space as ls says, then all I
    > can say is that it is an impressive problem.


    Yes, du agrees. In any case, dump is meant to do the right thing with
    sparse files.

    I tried gzipping one of the big files and it still comes out at 900M or
    so; I thought that would be a good idea in case dump had created some
    stupid sparse internal data structure.

    --
    Bob Eager
    UNIX since v6..
    http://tinyurl.com/2xqr6h


  4. Re: Stupidly large file created by dump(8)

    On Sep 16, 10:43 pm, "Bob Eager" wrote:
    > I have a machine which gets a level 0 dump(8) run periodically. In
    > between (for various reasons) it gets a level 9 dump to a differently
    > named file each day. So, the level 0 plus the latest level 9 will be
    > sufficient to restore.
    >
    > I have a strange situation...the level 9 dumps vary wildly in size.
    > Here's a fragment of a typical set of level 9 dumps:
    >
    > -rw-r--r-- 1 root wheel 21760000 Sep 16 04:01 sunday.usr
    > -rw-r--r-- 1 root wheel 21841920 Sep 10 04:01 monday.usr
    > -rw-r--r-- 1 root wheel 21913600 Sep 11 04:01 tuesday.usr
    > -rw-r--r-- 1 root wheel 2851287040 Sep 12 04:01 wednesday.usr
    > -rw-r--r-- 1 root wheel 2851287040 Sep 13 04:01 thursday.usr
    > -rw-r--r-- 1 root wheel 110172160 Sep 14 04:01 friday.usr
    >
    > I've put them in chronological order to make it clearer. I should
    > emphasise that the machine was creating very few, small files over
    > this time. Yet by Wednesday the dump size had suddenly jumped by a
    > factor of more than 100.
    >
    > If I restore the Wednesday dump (say) I get a file system that has a few
    > tens of megabytes - not over 2GB! So WHY is the dump file so big?


    OK, found what caused this (replying via Google Groups since I don't
    have the original...)

    dump was originally designed for tape backups only, with -f allowing
    one to specify a different tape. Of course, these days it's commonly
    used to dump to a large file elsewhere (as I have). But it's still the
    same program, essentially, that I used about 30 years ago.

    dump does not truncate the output file when it opens it (it uses
    O_CREAT but not O_TRUNC), so it simply starts writing at the start of
    the file. This means that the backup file will always be as large as
    the largest backup ever made to that file; the contents of the file
    will be the latest dump, followed by fragments of older and larger
    dumps. These fragments are ignored by restore, since the internal
    structure of the new dump does not refer to them.

    While setting up the system, I did have large dumps on the Wednesday
    and Thursday, so that size has persisted.

    The solution, of course, is to remove the output file before running
    dump!

