cpio files to remote server - SCO


Thread: cpio files to remote server

  1. cpio files to remote server

    Hey Everybody,

    This is what I'm starting with to backup some data from one box to
    another:
    find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"

    The job copies some data over every night. Right now, both servers are
    on the same LAN, but the backup server is going to be moved offsite,
    with a slower connection. Is there a way that I can add something like
    gzip into the command to compress the data before sending it to the
    remote server?

    Thanks for any ideas,
    Kevin Fleming


  2. Re: cpio files to remote server

    Kevin Fleming typed (on Tue, Sep 20, 2005 at 01:22:32PM -0700):
    | Hey Everybody,
    |
    | This is what I'm starting with to backup some data from one box to
    | another:
    | find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
    |
    | The job copies some data over every night. Right now, both servers are
    | on the same LAN, but the backup server is going to be moved offsite,
    | with a slower connection. Is there a way that I can add something like
    | gzip into the command to compress the data before sending it to the
    | remote server?

    Install and use rsync, which can compress the data in transit if you
    tell it to do so.
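
    For example, a rough sketch (untested here; assumes rsync is
    installed on both machines and that "backup" accepts rsh/rcmd
    logins -- adjust the remote-shell option and paths to suit):

    # -a preserves permissions/times/links, -z compresses in transit,
    # --delete drops files on the backup that no longer exist locally
    rsync -az --delete -e rsh /dir/data/ backup:/dir/data/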

    --
    JP

  3. Re: cpio files to remote server

    In article <1127247752.536290.93790@z14g2000cwz.googlegroups.com>,
    Kevin Fleming wrote:
    >Hey Everybody,
    >
    >This is what I'm starting with to backup some data from one box to
    >another:
    >find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
    >
    >The job copies some data over every night. Right now, both servers are
    >on the same LAN, but the backup server is going to be moved offsite,
    >with a slower connection. Is there a way that I can add something like
    >gzip into the command to compress the data before sending it to the
    >remote server?


    Sure. Try:

    find /dir/data -depth -print |cpio -o | gzip |rcmd backup "gunzip | cpio -ivdum"

    John
    --
    John DuBois spcecdt@armory.com KC6QKZ/AE http://www.armory.com/~spcecdt/

  4. Re: cpio files to remote server

    Kevin Fleming wrote:
    > Hey Everybody,
    >
    > This is what I'm starting with to backup some data from one box to
    > another:
    > find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
    >
    > The job copies some data over every night. Right now, both servers are
    > on the same LAN, but the backup server is going to be moved offsite,
    > with a slower connection. Is there a way that I can add something like
    > gzip into the command to compress the data before sending it to the
    > remote server?
    >
    > Thanks for any ideas,
    > Kevin Fleming
    >


    In my experience, rsync will be far more efficient since it only
    transmits the changed parts of changed files.
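
    To see how much actually goes over the wire, rsync can report it
    for you -- a sketch (--stats is a standard rsync option, but check
    your rsync's man page):

    # --stats prints a transfer summary, including "Total bytes sent"
    rsync -az --stats /dir/data/ backup:/dir/data/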

  5. Re: cpio files to remote server

    "John DuBois" wrote in message
    news:11j20f5d53b4053@corp.supernews.com...
    > In article <1127247752.536290.93790@z14g2000cwz.googlegroups.com>,
    > Kevin Fleming wrote:
    >>Hey Everybody,
    >>
    >>This is what I'm starting with to backup some data from one box to
    >>another:
    >>find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
    >>
    >>The job copies some data over every night. Right now, both servers are
    >>on the same LAN, but the backup server is going to be moved offsite,
    >>with a slower connection. Is there a way that I can add something like
    >>gzip into the command to compress the data before sending it to the
    >>remote server?

    >
    > Sure. Try:
    >
    > find /dir/data -depth -print |cpio -o | gzip |rcmd backup "gunzip |
    > cpio -ivdum"
    >
    > John
    > --
    > John DuBois spcecdt@armory.com KC6QKZ/AE
    > http://www.armory.com/~spcecdt/



    Thanks John, that works. Any ideas on how to see the amount of data
    that was actually transmitted? I suppose I could just gzip all of
    the data to a file on its own so I'd know what the compression is
    like, but thought someone might have a slicker way of doing it.
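
    Or, skipping the temporary file, I suppose something like this
    would count the compressed bytes directly, though I haven't tried
    it:

    find /dir/data -depth -print | cpio -o | gzip | wc -c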

    "Ian Wilson" wrote in message
    news:...

    > In my experience, rsync will be far more efficient since it only


    > transmits the changed parts of changed files.



    Also, thanks to JP Radley and Ian Wilson for their rsync
    suggestions; I'll have to look into that.


    Thanks,
    Kevin Fleming


  6. Re: cpio files to remote server

    In article <1127313072.278072.186600@g44g2000cwa.googlegroups.com>,
    Kevin Fleming wrote:
    >"John DuBois" wrote in message
    >news:11j20f5d53b4053@corp.supernews.com...
    >> In article <1127247752.536290.93790@z14g2000cwz.googlegroups.com>,
    >> Kevin Fleming wrote:
    >>>Hey Everybody,
    >>>
    >>>This is what I'm starting with to backup some data from one box to
    >>>another:
    >>>find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
    >>>
    >>>The job copies some data over every night. Right now, both servers are
    >>>on the same LAN, but the backup server is going to be moved offsite,
    >>>with a slower connection. Is there a way that I can add something like
    >>>gzip into the command to compress the data before sending it to the
    >>>remote server?

    >>
    >> Sure. Try:
    >>
    >> find /dir/data -depth -print |cpio -o | gzip |rcmd backup "gunzip |
    >> cpio -ivdum"

    >
    >Thanks John, that works. Any ideas on how to see the amount of data
    >that was actually transmitted?


    You could do:

    find /dir/data -depth -print |cpio -o | gzip | dd obs=1024k |
    rcmd backup "gunzip | cpio -ivdum"

    dd will report the number of (1MB) blocks that it writes ("records out").
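
    For reference, dd prints that summary on stderr; with obs=1024k it
    looks something like this (exact format varies by implementation):

    5120+0 records in
    2+1 records out

    Here "2+1" means two full 1MB output blocks plus one partial one,
    i.e. between 2 and 3 MB actually sent.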

    John
    --
    John DuBois spcecdt@armory.com KC6QKZ/AE http://www.armory.com/~spcecdt/

  7. Re: cpio files to remote server

    I know this is slightly off topic but some answers later in this chain
    mention gzip.

    Any thoughts as to whether that is more or less efficient than the
    old Unix zip and unzip we have been using for years?

    I prefer the unix zip and unzip programs because they are completely
    compatible with the zip and unzip programs built into Windows XP and also
    with older zip programs.

    I know for sure that compress and pack (in Unix) are, in most instances,
    less efficient than zip and unzip for un-compiled programs and data files.
    They may all turn out to be equally efficient for binary files, however.

    All comments appreciated.

    Thanks,

    DAW
    ==================

    "Kevin Fleming" wrote in message
    news:1127247752.536290.93790@z14g2000cwz.googlegroups.com...
    > Hey Everybody,
    >
    > This is what I'm starting with to backup some data from one box to
    > another:
    > find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
    >
    > The job copies some data over every night. Right now, both servers are
    > on the same LAN, but the backup server is going to be moved offsite,
    > with a slower connection. Is there a way that I can add something like
    > gzip into the command to compress the data before sending it to the
    > remote server?
    >
    > Thanks for any ideas,
    > Kevin Fleming
    >




  8. Re: cpio files to remote server

    SDS typed (on Sat, Sep 24, 2005 at 07:01:41PM +0000):
    | I know this is slightly off topic ....

    It's not at all off-topic.

    | Any thoughts as to whether [gzip is] more or less efficient than
    | the old Unix zip and unzip we have been using for years?

    Gzip is much more efficient than zip or compress, and bzip2 is still
    better.

    | I prefer the unix zip and unzip programs because they are completely
    | compatible with the zip and unzip programs built into Windows XP and also
    | with older zip programs.

    Winzip handles gzip, and the www.gzip.org page will point you to
    many other possibilities.

    | I know for sure that compress and pack (in Unix) are, in most instances,
    | less efficient than zip and unzip for un-compiled programs and data files.
    | They may all turn out to be equally efficient for binary files, however.

    They are not equally efficient for binary files any more than they are
    for text files.
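
    It's also easy to check on your own data. A quick sketch (untested;
    assumes all three tools are installed, and "somefile" stands in for
    a representative file of yours):

    # compress the same file with each tool and compare output sizes
    f=somefile
    for c in "compress" "gzip -9" "bzip2 -9"; do
        echo "$c:"
        $c -c < "$f" | wc -c
    done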

    --
    JP

  9. Re: cpio files to remote server

    sds10@earthlink.net wrote:

    > I know this is slightly off topic but some answers later in this chain
    > mention gzip.
    >
    > Any thoughts as to whether that is more or less efficient than the
    > old Unix zip and unzip we have been using for years?


    The algorithm used by `gzip` is one of the ones used by `zip`. `zip`
    supports several algorithms and tries to choose which one will produce
    the smallest output (actually the Unix port of `zip` just uses one
    algorithm, called "deflate").

    `gzip` and `bzip2` are single-file compressors: foo -> foo.gz or
    foo.bz2. `zip` and many others like it are combined archivers and
    compressors. Running `zip foo.zip foo bar baz` creates a single file,
    foo.zip, that contains those three files. The equivalent with `gzip`
    would be something like: `tar cf foo.tar foo bar baz; gzip foo.tar`.
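
    Or in one pass, without the intermediate tar file, assuming your
    tar can write the archive to standard output:

    tar cf - foo bar baz | gzip > foo.tar.gz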

    The `zip` archive format is imperfect for Unix purposes: it doesn't
    store all Unix attributes (I don't think it stores directory
    permissions, for instance). As long as you keep those things in
    mind it's probably fine.

    > I prefer the unix zip and unzip programs because they are completely
    > compatible with the zip and unzip programs built into Windows XP and also
    > with older zip programs.


    There are so many newer archivers for Windows -- `zip` is rather
    archaic. Two of the most popular ones these days are `rar` and `7-zip`.
    I've been experimenting with these and have found that `7-zip` can get
    the best compression of any compressor I've ever tried. Note that I say
    "_can_" get. It has a lot of knobs you can twiddle. Its default
    compression is similar to that of `rar`. (I'm working on some long-term
    archival storage where minimizing size is more important than saving
    compression time. For typical backup tasks, a faster compressor that
    leaves a few percent on the table is probably more appropriate.)

    7-zip for Unix lives at http://p7zip.sourceforge.net/. I'm not aware of
    an OpenServer port (I've been fiddling with it on Windows & Linux).
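
    A typical p7zip invocation looks something like this (a sketch; the
    archive name and path are just examples):

    # "a" adds to an archive; -mx=9 asks for maximum compression,
    # trading CPU time for output size
    7za a -mx=9 backup.7z /dir/data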

    > I know for sure that compress and pack (in Unix) are, in most instances,
    > less efficient than zip and unzip for un-compiled programs and data files.
    > They may all turn out to be equally efficient for binary files, however.


    `pack` is an ancient algorithm that is always less efficient than
    the modern ones. `compress` is newer and more efficient, but still
    nowhere near the more modern compressors.

    >Bela<


  10. Re: cpio files to remote server

    On Sat, 24 Sep 2005 19:32:24 UTC, Jean-Pierre Radley wrote:

    > | Any thoughts as to whether [gzip is] more or less efficient than
    > | the old Unix zip and unzip we have been using for years?
    >
    > Gzip is much more efficient than zip or compress, and bzip2 is
    > still better.


    Actually, you have to decide what you mean by efficient. In the
    normal case of compression ratio, yes, bzip2 is more efficient than
    gzip. But it is less efficient in CPU usage. At least when I've
    uncompressed large source archives, it seems to take longer with
    bzip2.

    Even when using gzip, I often choose a compression level based on
    time constraints, or on the CPU that will later uncompress the
    data. gzip -2, for instance, generally compresses fairly well, and
    runs faster than -7 or -9.
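
    A quick way to compare levels on your own data (an untested sketch;
    "bigfile" is just a placeholder):

    # compare speed and compressed size at two levels
    time gzip -2c < bigfile | wc -c
    time gzip -9c < bigfile | wc -c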



  11. Re: cpio files to remote server

    In article ,
    Kevin K wrote:
    >On Sat, 24 Sep 2005 19:32:24 UTC, Jean-Pierre Radley wrote:
    >
    >> | Any thoughts as to whether [gzip is] more or less efficient than
    >> | the old Unix zip and unzip we have been using for years?
    >>
    >> Gzip is much more efficient than zip or compress, and bzip2 is still
    >> better.



    >Actually, you have to decide what you mean by efficient. In the
    >normal case of compression ratio, yes, bzip2 is more efficient than
    >gzip. But it is less efficient in CPU usage. At least when I've
    >uncompressed large source archives, it seems to take longer with
    >bzip2.


    >Even when using gzip, I often choose a compression level based on
    >time constraints, or on the CPU that will later uncompress the
    >data. gzip -2, for instance, generally compresses fairly well, and
    >runs faster than -7 or -9.


    In a discussion in an email list I get, it was pointed out that
    bzip2 is less efficient for small files than gzip. For such things
    as man pages, the bzip2 file is often larger than the gzip file.

    For under 3K, gzip wins. For 3K to 6K it's about even. And over
    6K bzip2 comes out ahead in space savings.

    Over 10K bzip2 is a hands-down winner.
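
    If you want to check against your own files, a quick sketch
    (untested; point the glob wherever your formatted man pages or
    other small files live):

    # compare raw, gzip, and bzip2 sizes file by file
    for f in /usr/man/cat1/*; do
        r=`wc -c < "$f"`
        g=`gzip -9c < "$f" | wc -c`
        b=`bzip2 -9c < "$f" | wc -c`
        echo "$f: raw=$r gzip=$g bzip2=$b"
    done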

    Bill

    --
    Bill Vermillion - bv @ wjv . com
