About Restore and TMPDIR - BSD

Thread: About Restore and TMPDIR

  1. About Restore and TMPDIR

    _____
    Hello All,

    A recent Bare Metal Restore (BMR) of an OBSD 4.0 system failed! Why? To
    get this particular BMR job done I needed three hard disks in the
    machine at the same time. This meant that I needed devices /dev/wd1*
    and /dev/wd2*. Guess what? The OBSD boot/rescue floppy (or CD) after
    release 3.8 or so lost those devices in the /dev directory. You either
    make them manually, or use a copy of release 3.6, which has the devices
    I needed in the /dev directory. Why were devices wd1*, wd2* and wd3*
    dropped in more recent versions?

    After I figured out that I should be using release 3.6 to do a restore
    job, I ran into another issue. Restore needs the /tmp directory as
    working storage. When I restore a large system, the /tmp directory in
    RAM fills up instantly. To solve this problem, the restore man page
    says to set the environment variable
    TMPDIR=/some_large_mounted_directory. Fine. When I do this, the restore
    does not recognise the environment variable properly, and the restore
    fails as /tmp fills up. Yet, it shows up as set when I ask for it via
    "set". Is this a known problem?

    To work around it, I just delete the RAM-based /tmp directory and set a
    symbolic link to /some_large_mounted_directory, usually on the hard
    disk partition that holds the dumps I am restoring from.
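    That workaround can be sketched as follows. The mount point
    /mnt/restore is hypothetical; substitute whatever partition actually
    holds the dumps:

    ```shell
    # Hypothetical mount point for the partition holding the dump files.
    DUMPDISK=/mnt/restore
    mkdir -p "$DUMPDISK/tmp"
    # Replace the RAM-backed /tmp with a symlink onto the big disk,
    # so restore's scratch files land on disk instead of in RAM.
    rm -rf /tmp
    ln -s "$DUMPDISK/tmp" /tmp
    ```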

    Any comments?

    --
    Regards / JCH

  2. Re: About Restore and TMPDIR

    jch wrote:
    > After I figured out that I should be using release 3.6 to do a restore
    > job, I ran into another issue. Restore needs the /tmp directory as
    > working storage. When I restore a large system, the /tmp directory in
    > RAM fills up instantly. To solve this problem, the restore man page
    > says to set the environment variable
    > TMPDIR=/some_large_mounted_directory. Fine. When I do this, the
    > restore does not recognise the environment variable properly, and the
    > restore fails as /tmp fills up. Yet, it shows up as set when I ask for
    > it via "set". Is this a known problem?
    >

    If the restore process is a subshell, you might have to export that
    variable.
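    A quick demonstration of the difference (sh/ksh syntax; the path is
    hypothetical):

    ```shell
    unset TMPDIR                          # start clean for the demonstration
    TMPDIR=/mnt/restore/tmp               # set, but not exported
    sh -c 'echo "child sees: [$TMPDIR]"'  # prints "child sees: []"
    export TMPDIR                         # now child processes inherit it
    sh -c 'echo "child sees: [$TMPDIR]"'  # prints the path
    ```

    Note that plain "set" lists shell variables whether or not they are
    exported, which is why TMPDIR looked set even though restore, a child
    process, never saw it.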

    --
    clvrmnky

    Direct replies will be blacklisted. Replace "spamtrap" with my name to
    contact me directly.

  3. Re: About Restore and TMPDIR

    Clever Monkey wrote:
    > jch wrote:
    >> After I figured out that I should be using release 3.6 to do a restore
    >> job, I ran into another issue. Restore needs the /tmp directory as
    >> working storage. When I restore a large system, the /tmp directory in
    >> RAM fills up instantly. To solve this problem, the restore man page
    >> says to set the environment variable
    >> TMPDIR=/some_large_mounted_directory. Fine. When I do this, the
    >> restore does not recognise the environment variable properly, and the
    >> restore fails as /tmp fills up. Yet, it shows up as set when I ask
    >> for it via "set". Is this a known problem?
    >>

    > If the restore process is a subshell, you might have to export that
    > variable.

    _____
    Of course. I keep forgetting to issue the export command! One should
    always do it for good measure after setting an environment variable.
    Thanks.
    --
    Regards / JCH

  4. Re: About Restore and TMPDIR

    jch wrote:
    > Clever Monkey wrote:
    >> jch wrote:
    >>> After I figured out that I should be using release 3.6 to do a
    >>> restore job, I ran into another issue. Restore needs the /tmp
    >>> directory as working storage. When I restore a large system, the
    >>> /tmp directory in RAM fills up instantly. To solve this problem, the
    >>> restore man page says to set the environment variable
    >>> TMPDIR=/some_large_mounted_directory. Fine. When I do this, the
    >>> restore does not recognise the environment variable properly, and the
    >>> restore fails as /tmp fills up. Yet, it shows up as set when I ask
    >>> for it via "set". Is this a known problem?
    >>>

    >> If the restore process is a subshell, you might have to export that
    >> variable.

    >
    > Of course. I keep forgetting to issue the export command! One should
    > always do it for good measure after setting an environment variable.


    Typically, I decide which variables need to be exported to subshells,
    and group those together so it is clear what I am doing, without
    cluttering up the subshell environment with garbage.

    e.g.:

    VAR1=foo
    VAR2=bar
    LOCALTMP="something local"
    BLAH=Arrgh

    export VAR1 VAR2 BLAH
    --
    clvrmnky

    Direct replies will be blacklisted. Replace "spamtrap" with my name to
    contact me directly.

  5. Re: About Restore and TMPDIR

    Clever Monkey wrote:

    >> Of course. I keep forgetting to issue the export command! One should
    >> always do it for good measure after setting an environment variable.

    >
    > Typically, I decide which variables need to be exported to subshells,
    > and group those together so it is clear what I am doing, without
    > cluttering up the subshell environment with garbage.
    >
    > e.g.:
    >
    > VAR1=foo
    > VAR2=bar
    > LOCALTMP="something local"
    > BLAH=Arrgh
    >
    > export VAR1 VAR2 BLAH

    _____
    Good idea.

    Do you care to comment on the more significant issue below?
    > A recent Bare Metal Restore (BMR) of an OBSD 4.0 system failed! Why?
    > To get this particular BMR job done I needed three hard disks in the
    > machine at the same time. This meant that I needed devices /dev/wd1*
    > and /dev/wd2*. Guess what? The OBSD boot/rescue floppy (or CD) after
    > release 3.8 or so lost those devices in the /dev directory. You
    > either make them manually, or use a copy of release 3.6, which has the
    > devices I needed in the /dev directory. Why were devices wd1*, wd2*
    > and wd3* dropped in more recent versions?

    --
    Regards / JCH

  6. Re: About Restore and TMPDIR

    jch wrote:
    > Do you care to comment on the more significant issue below?
    >> A recent Bare Metal Restore (BMR) of an OBSD 4.0 system failed! Why?
    >> To get this particular BMR job done I needed three hard disks in the
    >> machine at the same time. This meant that I needed devices /dev/wd1*
    >> and /dev/wd2*. Guess what? The OBSD boot/rescue floppy (or CD) after
    >> release 3.8 or so lost those devices in the /dev directory. You
    >> either make them manually, or use a copy of release 3.6, which has
    >> the devices I needed in the /dev directory. Why were devices wd1*,
    >> wd2* and wd3* dropped in more recent versions?


    Yeah, that is a mystery to me. Obviously, all the wd devices are in
    4.0. I don't know about the rescue boot image, though. Which image are
    you using? I find it hard to believe that /dev/wd* is missing from
    cdrom40.fs, for example.

    Assuming the support is in that kernel image, I guess you can just make
    them by hand. Is MAKEDEV available on this image?
    --
    clvrmnky

    Direct replies will be blacklisted. Replace "spamtrap" with my name to
    contact me directly.

  7. Re: About Restore and TMPDIR

    Clever Monkey wrote:
    > jch wrote:
    >> Do you care to comment on the more significant issue below?
    >>> A recent Bare Metal Restore (BMR) of an OBSD 4.0 system failed! Why?
    >>> To get this particular BMR job done I needed three hard disks in the
    >>> machine at the same time. This meant that I needed devices /dev/wd1*
    >>> and /dev/wd2*. Guess what? The OBSD boot/rescue floppy (or CD) after
    >>> release 3.8 or so lost those devices in the /dev directory. You
    >>> either make them manually, or use a copy of release 3.6, which has
    >>> the devices I needed in the /dev directory. Why were devices wd1*,
    >>> wd2* and wd3* dropped in more recent versions?

    >
    > Yeah, that is a mystery to me. Obviously, all the wd devices are in
    > 4.0. I don't know about the rescue boot image, though. Which image are
    > you using? I find it hard to believe that /dev/wd* is missing from
    > cdrom40.fs, for example.

    _____
    No, only the /dev/wd0* devices are on the RAM disk. I am using the
    floppy40 image. I don't know what is on the cdrom40.fs boot image; it
    actually does not matter.

    > Assuming the support is in that kernel image, I guess you can just make
    > them by hand. Is MAKEDEV available on this image?

    _____
    Clever folks, those OpenBSD developers! Problem solved. Here is what I
    found:
    1) The OBSD release 4.0 RAM disk only has the wd0* devices.
    2) The /dev/MAKEDEV script exists on it.
    3) To make the required devices for, say, wd1 you simply do "cd /dev ;
    sh MAKEDEV wd1". All devices are written to the current directory,
    /dev in this case.
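    Step 3 spelled out for the original three-disk job (this must be run as
    root in the rescue shell on the RAM disk; it is a sketch of the
    commands described above, not something to run on a live system):

    ```shell
    cd /dev
    sh MAKEDEV wd1   # creates the block and character nodes for wd1 here
    sh MAKEDEV wd2   # same for the third disk
    ls wd1a rwd1a    # quick sanity check that the nodes now exist
    ```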

    I know that the install script makes devices on the hard disk at
    installation time. So, there is a call to the /dev/MAKEDEV script on
    the RAM disk somewhere.

    When I do bare metal restores, I now know to add a step to create
    devices in the /dev directory with /dev/MAKEDEV. Simple enough. This is
    probably documented on the OpenBSD web site, but I have not yet
    bothered to look it up.
    --
    Regards / JCH
