what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages" - SCO



Thread: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"

  1. Re: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"

    Brian Hutchinson wrote:
    > Steve M. Fabac, Jr. wrote:
    >
    >>>
    >>> The guy that worked on it before me mirrored the drive to a new SCSI.
    >>> I suspect that it is going into panic due to the drive or filesystem
    >>> being corrupt.

    >>
    >>
    >> Tell us more about this "mirrored the drive to a new SCSI"
    >>
    >> Is the old drive available? Did the old drive panic? How was the
    >> mirror accomplished?
    >>
    >> We can return to your other questions once you provide answers to the
    >> above questions.
    >>

    > I'm not sure what he used ... I'll ask and find out. The brand new SCSI
    > disk does the exact same thing the old one did. Divvy looks the same,
    > the same 3 bad blocks were complained about when I added the disk with
    > mkdev hd the first time, and when I do fsck -ofull on /dev/part4 and
    > /dev/part5 they both have the same warning about the filesystem being
    > larger than the division it is currently in, etc. (which I exit out of
    > and don't continue), so at this point it looks like a good copy of the
    > original disk.
    >
    > I haven't pulled the trigger on doing anything destructive to this disk
    > until I've found all the parts to the puzzle and know what my options are.
    >
    > /dev/part2 says there is no filesystem.
    >
    > I think root was what I call /dev/part5: when I mount it, all it has is
    > a lost+found directory with tons of files whose names are just big
    > numbers. I'll have to try this again as I don't remember, but I
    > think I grepped /dev/part5 for something that made me think it was root.


    Bela will likely chime in here, but I don't think it is possible to install
    SCO 5.0.5 with root anywhere but in part2. By default 0 is boot, 1 is swap,
    and 2 is root. Unless this is SCO UNIX 3.2v4.2 (pre SCO 5.0.0) in which
    case 0 is root and 1 is still swap.

    >
    >>
    >> Stop. DO NOT ATTEMPT TO RESTORE FROM TAPE TO ANY FILE SYSTEM ON THE
    >> SCSI DISK. If you feel you must restore from tape, get a new SCSI disk,
    >> run mkdev hd and create file systems on the new disk then restore to
    >> those.
    >> If you get lucky and one of the tapes has a full system backup of the
    >> root file system from the original SCSI disk, restore it to the new
    >> SCSI disk and boot that disk. Then investigate the remaining tapes.

    > At this point I'm just figuring out what I have to work with. This box
    > controls a huge mail sorting machine and apparently the SCO box went
    > down right after they bought it.


    That's a whole different ball game. I was called to an AMF bowling center
    where they had a very customized SCO Xenix system running the pin setters.
    It was totally unlike any SCO I'd worked on.

    Bought new from the manufacturer? or used? If new, then you should call the
    manufacturer for technical support.

    Since it is dedicated to controlling a machine, I doubt that there is any
    critical data on the SCO disk. Check with the manufacturer and make sure
    you have all disks, licenses, tapes, etc. to re-install SCO and whatever
    software was provided to control the mail sorter. Then re-install on your
    IDE disk or a new SCSI disk, and have the client contract with the
    manufacturer to set it up as needed.


    > It came with a few tapes and no media
    > .. no boot floppies, no rescue CD, nothing. So I'm trying to figure out
    > what is on the tapes and what the filesystems on the disk are so I can
    > try and match up what I see on the tape with where it should go on the
    > disk.
    >
    > All I have done so far is look.
    >>
    >> The only reason to fool with a disk is if there is important data on the
    >> disk that is not backed up to tape or other storage and has to be
    >> recovered.
    >>
    >> In another post, you indicated that the only partitions that fsck
    >> complains about are part4 and part5. The root partition is part2, and
    >> if it passes fsck OK, then you can mount it and explore its contents.

    > I'm not so sure since divvy reports no fs on /dev/part2.
    >
    > Does SCO not have a file like Linux's inittab that shows the devices and
    > where they are mounted? I look at the inittab of the IDE drive I have
    > that works and it doesn't help me in trying to figure out where to look
    > on the messed up SCSI drive to get a clue as to the original filesystem
    > layout. I thought I'd see entries for root, boot etc. but I don't.


    /etc/default/filesys is the mount table consulted at boot time to mount
    file systems. All you will get from that is the /dev/name_of_part_X and
    the mount point. There is no information on which disk or partition the
    file system is located; that's all encoded in the major/minor numbers
    of the /dev/file_system_name entry. There is nothing there that will tell
    you the correct size of the file system.
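
    (For reference, an entry in /etc/default/filesys looks something like
    this -- the device and mount point names here are only illustrative:

    bdev=/dev/u cdev=/dev/ru mountdir=/u mount=yes fsckflags=-y

    so at best it confirms which /dev names the system expected to mount,
    and where.)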

    >
    > I don't remember if I mentioned this or not but I used to administer a
    > SCO box back in 2001 but it has been too long ago and I've forgotten
    > most of what I knew about SCO ... now it is Linux and Solaris I work on.
    >
    >


    In another post you wrote: PANIC: srmountfn Error 22 mounting rootdev hd (1/42)
    cannot dump 32671 pages to dumpdev hd (1/41): space for only 0 pages

    This error is indicative of a corrupted super block on the root file system.
    (Note that 1/42 *IS* part2 on your disk.)

    See: http://unix.derkeiler.com/Newsgroups...4-09/0273.html

    Or search Google Groups with fsdb srmountfn

    In that post, I detail how to recover from srmountfn on a Xenix file system.
    Since I have not been successful in using fsdb on HTFS file systems,
    I can't say whether the information applies there as well.
    --
    Steve Fabac
    S.M. Fabac & Associates
    816/765-1670

  2. Re: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"

    FWIW: Try a Google search for "kernel initialization check letters"
    and look at the H iinit section. Or try an alternate search, "H iinit sco".

    The "0 pages" is evidently a standard part of the panic message and
    doesn't mean anything by itself.

    And notice the date part. It may mean nothing.

    What is the result of divvy /dev/part0? Or even /dev/hd10? Either should
    show the divvy table. My 0 is EAFS and 20K blocks, 1 is nonfs and 64K
    blocks, 2 is HTFS and roughly 2M blocks. Default assigned boot, swap,
    and root.
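
    (For anyone following along, the table divvy prints looks roughly like
    this -- names and block numbers here are illustrative, not from the
    problem disk:

    Name       Type        New FS  #  First Block  Last Block
    boot       EAFS        no      0  0            20479
    swap       NON FS      no      1  20480        151551
    root       HTFS        no      2  151552       2168831
    hd0a       WHOLE DISK  no      7  0            2168832

    The First Block / Last Block columns are the key thing to record.)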

    With the H iinit error, is it safe to do a fsck on the root, whatever
    it is presently named?

  3. Re: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"


    > Bela will likely chime in here, but I don't think it is possible to install
    > SCO 5.0.5 with root anywhere but in part2. By default 0 is boot, 1 is swap,
    > and 2 is root. Unless this is SCO UNIX 3.2v4.2 (pre SCO 5.0.0) in which
    > case 0 is root and 1 is still swap.


    That is good to know. I'll grep my /dev/part2 device and see if I can
    smoke out anything that would point to root being there.
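
    (One non-destructive way to go fishing for that -- the patterns are just
    guesses at strings only a root filesystem would contain:

    # strings /dev/part2 | egrep 'inittab|mkdev|etc/passwd' | head -20

    Expect plenty of noise; a hit on something like /etc/inittab would be
    suggestive.)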

    The IT guy at the printing place found a page in one of the manuals
    with this breakdown which supports what you said:
    0 boot EAFS
    1 swap NON FS
    2 root HTFS
    3 eng HTFS
    4 eng2 HTFS
    5 eng3 HTFS
    6 recover NON FS
    7 d1057all Whole Disk

    When I did the mkdev hd, I remember seeing divvy show eng3 and the
    whole-disk division and maybe boot, but nothing else. The challenge has
    been figuring out what the original divvy table looked like so I can
    then either try to repair those filesystems or restore them from tape
    (there is no documentation on what the tapes are, so it is like working
    a puzzle).
    >


    > That's a whole different ball game. I was called to an AMF bowling center
    > where they had a very customized SCO Xenix system running the pin setters.
    > It was totally unlike any SCO I'd worked on.
    >
    > Bought new from the manufacturer? or used? If new, then you should call the
    > manufacturer for technical support.


    It is 1993/1994 Bell & Howell equipment that was acquired used ... the
    original company no longer exists. The IT guy at the printing place
    that bought it tried to work on it, as did a technician who services
    these machines. They then posted to our local Linux Users Group to
    find Unix gurus, and that is how I got involved. I've never been
    faced with quite this kind of puzzle before with SCO (dealt with
    similar things with Sun & Linux though). It appears it worked for a
    while and would get to the login prompt but not now. There is a tape
    of what I think is the root filesystem.
    >


    > > Does SCO not have a file like Linux's inittab that shows the devices and
    > > where they are mounted? I look at the inittab of the IDE drive I have
    > > that works and it doesn't help me in trying to figure out where to look
    > > on the messed up SCSI drive to get a clue as to the original filesystem
    > > layout. I thought I'd see entries for root, boot etc. but I don't.

    >
    > The /etc/default/filesys is the mount table consulted on boot up to mount
    > file systems. All you will get from that is the /dev/name_of_part_X and
    > the mount point. There is no information on what disk or partition the
    > file system is located on. That's all encoded by the major/minor numbers
    > for the /dev/file_system_name entry. There is nothing there that will tell
    > you the correct size of the file system.
    >
    >
    >
    > > I don't remember if I mentioned this or not but I used to administer a
    > > SCO box back in 2001 but it has been too long ago and I've forgotten
    > > most of what I knew about SCO ... now it is Linux and Solaris I work on.

    >
    > In another post you wrote: PANIC: srmountfn Error 22 mounting rootdev hd (1/42)
    > cannot dump 32671 pages to dumpdev hd (1/41): space for only 0 pages
    >
    > This error is indicative of a corrupted super block on the root file system.
    > (Note that 1/42 *IS* part2 on your disk.)
    >
    > See: http://unix.derkeiler.com/Newsgroups...c/2004-09/0273....
    >
    > Or search Google Groups with fsdb srmountfn
    >
    > In that post, I detail how to recover from srmountfn on a Xenix file system.
    > Since I have not been successful in using fsdb on HTFS file systems,
    > I can't say whether the information applies there as well.

    Outstanding! Thanks. I've found online SCO Companion books too so
    I've started to hit those.
    I wonder if any of the HTFS versioning stuff can be leveraged in
    situations like this ... still reading.

    I'm going to image the good SCSI to another disk (I'm sure I have one
    around here somewhere) so I can start to try things, as I don't want to
    mess with that one (it's the baseline) and I don't trust the original
    SCSI, as it probably is failing, which is what led to all this.

  4. Re: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"

    On Sep 17, 12:16 am, ed wrote:
    > FWIW: Try a google search for "kernel initialization check letters"
    > and look at the H iinit section. Or an alternate "H iinit sco".

    Good suggestion. I just read a section about H last night in the SCO
    Companion. If I remember right, it is the first place superblocks and
    inodes are referenced, which I think is why it is going into a panic.

    >
    > The 0 pages evidently is a standard part of the panic message and
    > doesn't mean anything.
    >
    > And notice the date part. It may mean nothing.
    >
    > What is the result of divvy /dev/part0? Or even /dev/hd10. Should
    > show the divvy table. My 0 is EAFS and 20K blocks, 1 is nonfs and 64K
    > blocks, 2 is HTFS and roughly 2M blocks. Default assigned boot, swap,
    > and root.
    >
    > With the H iinit error, is it safe to do a fsck on the root, whatever
    > it is presently named?

    I did a fsck -ofull on /dev/part0 (boot) and it was happy. I don't
    remember trying it on /dev/part2 as I thought divvy reported NON FS.

    I need to image the disk I'm using as my baseline to another drive so
    I can try more things.

    Thanks!

    Brian


  5. Re: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"

    b.hutchman@gmail.com wrote:
    >> Bela will likely chime in here, but I don't think it is possible to install
    >> SCO 5.0.5 with root anywhere but in part2. By default 0 is boot, 1 is swap,
    >> and 2 is root. Unless this is SCO UNIX 3.2v4.2 (pre SCO 5.0.0) in which
    >> case 0 is root and 1 is still swap.

    >
    > That is good to know. I'll grep my /dev/part2 device and see if I can
    > smoke out anything that would point to root being there.
    >
    > The IT guy at the printing place found a page in one of the manuals
    > with this breakdown which supports what you said:
    > 0 boot EAFS
    > 1 swap NON FS
    > 2 root HTFS
    > 3 eng HTFS
    > 4 eng2 HTFS
    > 5 eng3 HTFS
    > 6 recover NON FS
    > 7 d1057all Whole Disk
    >
    > When I did the mkdev hd, I remember seeing divvy show eng3 and the
    > whole-disk division and maybe boot, but nothing else. The challenge has
    > been figuring out what the original divvy table looked like so I can
    > then either try to repair those filesystems or restore them from tape
    > (there is no documentation on what the tapes are, so it is like working
    > a puzzle).


    Did divvy show the start and end blocks for the unnamed file systems?
    (Please post the divvy table you see. Second request!) If not, you have
    a big problem, as trying to find the start of a division is not
    a trivial matter.

    In another post you said:

    > I'm not sure what he used ... I'll ask and find out. The brand new SCSI
    > disk does the exact same thing the old one did. Divvy looks the same,
    > the same 3 bad blocks were complained about when I added the disk with
    > mkdev hd the first time, and when I do fsck -ofull on /dev/part4 and /dev/part5
    > they both have the same warning about the filesystem being larger than the
    > division it is currently in, etc. (which I exit out of and don't continue),
    > so at this point it looks like a good copy of the original disk.


    The warning that the file system is larger than the space allocated
    should not be fatal. Fsck is warning you that you should take steps to
    correct the problem. I've seen that before when a client used Microlite
    BackupEDGE to move his system from a 9G disk to an 18G disk and answered
    "percentage" when asked by the RE2 software how he wanted the partitions
    resized to fit the new disk: "size" keeps the fdisk partition and divvy
    file systems the same size as on the original disk; "percentage" expands
    the fdisk partition and divvy file systems as much as it can to utilize
    the additional disk space on the new disk.

    Go ahead and run "fsck -n -ofull /dev/part3 > /tmp/logfsck 2>&1". That will
    not alter the file system, and it will log its results to /tmp/logfsck.
    Check to see what fsck tells you about the file system.

    Note that even if the file system is not "clean" you can mount it with -r (read only)
    and create a current backup to tape. If fsck throws a lot of errors beyond:
    "UNREF FILE I=inode-number OWNER=UID MODE=file-mode SIZE=file-size
    MTIME=modification-time (RECONNECT)" you might not want to trust what you can read
    from the read-only file system. If logfsck shows only minor problems, go ahead and
    run fsck without the -n and let it fix what it can.
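
    A minimal sketch of that read-only mount and backup, assuming /mnt
    exists and the tape drive is /dev/rStp0 as above:

    # mount -r /dev/part3 /mnt
    # cd /mnt
    # find . -depth -print | cpio -ocvB > /dev/rStp0
    # cd / ; umount /dev/part3

    (cpio -ocB writes a blocked ASCII-header archive to the tape.)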

    With all the information you have provided I'd suggest the following
    sequence:

    1) Restore the suspected root backup tape to a file system (not the root)
    on the IDE disk you are using to try to mount the problem disk. Use
    dtype /dev/rStp0 to check the format of the tape (cpio, tar, etc.). Pray
    that it is not tar, as tar is inadequate for backing up the root file
    system: it will not back up /dev nodes.
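
    For example, assuming dtype reports cpio, and using /dev/scratch and
    /recover as illustrative names for a spare file system and its mount
    point:

    # dtype /dev/rStp0
    # mount /dev/scratch /recover
    # cd /recover
    # cpio -icvdumB < /dev/rStp0

    (-d creates directories as needed; -u and -m restore unconditionally
    and preserve modification times.)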

    2) Figure out how old the root backup is. 1993-95? This century? You decide
    whether you can trust the backup to have all the information you need to
    replicate the running system on a new disk.

    3) Check the crontab on the restored root to see if you can identify any
    scheduled backup that might have been set up. If you're lucky, they included
    one of the supertar products to perform the backup. If it's a home-brew
    backup, check the script and see if it logs any information about the disk
    layout in the backup log.
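
    For example, with the restored root mounted at /recover:

    # cat /recover/usr/spool/cron/crontabs/root
    # egrep -l 'cpio|tar|Stp' /recover/usr/local/bin/* 2>/dev/null

    (The second line is just one guess at where a home-brew backup script
    might live.)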

    (No help to you, but when I used to use my cpio script to back up systems,
    I always included the output of dfspace in the backup log:

    > Start CPIO tape write: Thu Oct 07 12:29:22 2002
    >
    > / : Disk space: 1538.45 MB of 3678.40 MB available (41.82%).
    > /app1 : Disk space: 2987.78 MB of 6718.58 MB available (44.47%).
    > /app2 : Disk space: 2455.14 MB of 6718.58 MB available (36.54%).
    > /stand : Disk space: 27.77 MB of 39.99 MB available (69.43%).
    >
    > Total Disk Space: 7009.15 MB of 17155.58 MB available (40.86%).
    >
    > Root Disk Division:
    > 0: boot 8033 48991
    > 1: swap 48992 345950
    > 2: root 345951 4112636
    > 7: hd0a 0 4120671


    The dfspace information would at least tell you the sizes of the
    various mounted file systems so that you can experiment with creating
    file systems of the same size to get the block size of the file systems.
    I have since moved all my clients to Microlite BackupEDGE, which creates
    an RE2 ISO image with all that information recorded for you.)

    4) Failing to find anything useful in /usr/spool/cron/crontabs/root, look
    around and see if you can find anything else that helps.

    5) If you can dope out the original size of the root file system (I'd use
    whatever divvy indicates for the start and end block for division 2),
    create a new file system the same size on the IDE disk and then use:

    "dd if=/dev/correct_file_system of=/tmp/sb bs=1k count=1"

    to grab the superblock from the just created file system and then
    write it to the /dev/part2 (damaged root file system):

    "dd if=/tmp/sb of=/dev/part2 bs=1k" (Keep a copy of the original /dev/part2

    superblock before you do this so if you get it wrong, you can go back and
    try again.) Then run fsck on the /dev/part2 file system (use -n until you
    see what fsck says).
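
    So the whole sequence is something like this (the /tmp file names are
    only examples):

    # dd if=/dev/part2 of=/tmp/sb.orig bs=1k count=1 # save damaged superblock
    # dd if=/dev/correct_file_system of=/tmp/sb bs=1k count=1
    # dd if=/tmp/sb of=/dev/part2 bs=1k
    # fsck -n -ofull /dev/part2 # look before letting fsck write

    If it makes things worse, dd /tmp/sb.orig back the same way.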

    NOTE: I have never done this on an HTFS file system. It may or may not work.


    >
    >> That's a whole different ball game. I was called to an AMF bowling center
    >> where they had a very customized SCO Xenix system running the pin setters.
    >> It was totally unlike any SCO I'd worked on.
    >>
    >> Bought new from the manufacturer? or used? If new, then you should call the
    >> manufacturer for technical support.

    >
    > It is 1993/1994 Bell & Howell equipment that was acquired used ... the
    > original company no longer exists. The IT guy at the printing place
    > that bought it tried to work on it as well as a technician that
    > services these. They then posted to our local Linux Users group to
    > find Unix guru's and that is how I got involved. I've never been
    > faced with quite this kind of puzzle before with SCO (dealt with
    > similar things with Sun & Linux though). It appears it worked for a
    > while and would get to the login prompt but not now. There is a tape
    > of what I think is the root filesystem.
    >
    >>> Does SCO not have a file like Linux's inittab that shows the devices and
    >>> where they are mounted? I look at the inittab of the IDE drive I have
    >>> that works and it doesn't help me in trying to figure out where to look
    >>> on the messed up SCSI drive to get a clue as to the original filesystem
    >>> layout. I thought I'd see entries for root, boot etc. but I don't.

    >> The /etc/default/filesys is the mount table consulted on boot up to mount
    >> file systems. All you will get from that is the /dev/name_of_part_X and
    >> the mount point. There is no information on what disk or partition the
    >> file system is located on. That's all encoded by the major/minor numbers
    >> for the /dev/file_system_name entry. There is nothing there that will tell
    >> you the correct size of the file system.
    >>
    >>
    >>
    >>> I don't remember if I mentioned this or not but I used to administer a
    >>> SCO box back in 2001 but it has been too long ago and I've forgotten
    >>> most of what I knew about SCO ... now it is Linux and Solaris I work on.

    >> In another post you wrote: PANIC: srmountfn Error 22 mounting rootdev hd (1/42)
    >> cannot dump 32671 pages to dumpdev hd (1/41): space for only 0 pages
    >>
    >> This error is indicative of a corrupted super block on the root file system.
    >> (Note that 1/42 *IS* part2 on your disk.)
    >>
    >> See: http://unix.derkeiler.com/Newsgroups...c/2004-09/0273....
    >>
    >> Or search Google Groups with fsdb srmountfn
    >>
    >> In that post, I detail how to recover from srmountfn on a Xenix file system.
    >> Since I have not been successful in using fsdb on HTFS file systems,
    >> I can't say whether the information applies there as well.

    > Outstanding! Thanks. I've found online SCO Companion books too so
    > I've started to hit those.
    > I wonder if any of the HTFS versioning stuff can be leveraged in
    > situations like this ... still reading.
    >
    > I'm going to image the good SCSI to another disk (I'm sure I have one
    > around here somewhere) so I can start to try things, as I don't want to
    > mess with that one (it's the baseline) and I don't trust the original
    > SCSI, as it probably is failing, which is what led to all this.
    >
    >


    --
    Steve Fabac
    S.M. Fabac & Associates
    816/765-1670

  6. Re: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"

    Steve M. Fabac, Jr. wrote:

    > Bela will likely chime in here, but I don't think it is possible to install
    > SCO 5.0.5 with root anywhere but in part2. By default 0 is boot, 1 is swap,
    > and 2 is root. Unless this is SCO UNIX 3.2v4.2 (pre SCO 5.0.0) in which
    > case 0 is root and 1 is still swap.


    I don't really enjoy being "invoked" like that...

    It's possible to install OSR5 onto any division of any partition. To
    install anywhere other than the default division-2-of-active-partition
    requires hackery that few people know; the original poster's system
    probably isn't that strange.

    I haven't read the very latest posts on this thread yet, but so far it
    looks like everyone is missing the probable cause here. It looks like
    the drive's perceived geometry has changed. The OP should first look at
    his live system, at the files /usr/adm/hwconfig and /usr/adm/syslog, to
    get a feel for what these look like. For instance:

    $ grep cyls /usr/adm/hwconfig
    %disk 0x01F0-0x01F7 14 - type=W0/0 unit=0 cyls=60321 hds=255 secs=127
    %Sdsk - - - cyls=4462 hds=255 secs=63 fts=stdb
    %Sdsk - - - cyls=17849 hds=255 secs=63 fts=stdb
    %disk 0x01F0-0x01F7 14 - type=W0/1 unit=1 cyls=60801 hds=255 secs=63

    This system has two SCSI and two IDE drives, thus the two formats.
    Next:

    $ strings /usr/adm/hwconfig | grep cyls
    14 - type=W0/0 unit=0 cyls=60321 hds=255 secs=127
    - - cyls=4462 hds=255 secs=63 fts=stdb
    - - cyls=17849 hds=255 secs=63 fts=stdb
    14 - type=W0/1 unit=1 cyls=60801 hds=255 secs=63

    Notice that `strings` cuts off the "%what" part -- this is because it's
    separated by a TAB char, and `strings` thinks TAB isn't a printable
    char. Now you can do:

    # strings /dev/hd20 | grep cyls=

    (where hd20 is the old drive's whole-disk device node), and you'll
    likely get a _lot_ of output, some of which will be the original
    geometry of the drive. Other parts will be various sorts of noise, so
    it's up to you to separate the wheat from the chaff. If there were
    multiple drives on the old system, you'll see several sets of geometry
    to choose from. Multiply (cyls * hds * secs * 512) to get the size in
    bytes; for the above 4, I get:

    1,000,189,739,520 (~1TB)
    36,701,199,360 (~36GB)
    146,813,022,720 (~146GB)
    500,105,249,280 (~500GB)

    Calculate this and pick out which size correctly applies to the make &
    model of the old drive.
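
    awk makes quick work of the multiplication; for the first line above:

    $ echo 60321 255 127 | awk '{ printf "%.0f\n", $1 * $2 * $3 * 512 }'
    1000189739520

    and likewise for the other three.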

    Now look at /usr/adm/hwconfig on the new system. What is it giving for
    the old drive's geometry?

    If it matches your discovered geometry from the old drive, I'm wrong,
    this isn't a geometry problem. Report that and we have a new basis for
    further discovery.

    If it doesn't match, it's a geometry problem. Then we need to think
    about how to fix it.

    Two classes of geometry problem. One is a case where the "hds" and
    "secs" values match, but the "cyls" is much too low. This happens when
    you have a drive larger than is supported by the new HBA driver. For
    instance, a previous version of the LSI Logic 53c1030 driver, "lsil",
    couldn't handle drives larger than 64GiB. It saw drive sizes as (actual
    size mod 64GiB), so a 100GiB drive showed up as 36GiB, etc. The fix for
    this is (1) update to the newest version of the driver for that HBA
    (that fixes the "lsil" case); (2) if no newer driver exists, throw out
    the HBA, get one with a working driver.

    The other class of geometry problem is where the numbers just don't
    match at all. Look at the 36GB drive:

    %Sdsk - - - cyls=4462 hds=255 secs=63 fts=stdb

    Many HBAs like a geometry of 64 heads, 32 sectors/track. They would
    show this same drive as:

    %Sdsk - - - cyls=35000 hds=64 secs=32 fts=stdb

    (probably 35001 cylinders). In such a case, what you have to do is
    "stamp" the drive with the correct (original) geometry so OSR5 will know
    how to find stuff on it. This used to be very easy, you would just:

    # dparam -w /dev/rhd20 # where /dev/rhd20 is the drive in question
    # dparam /dev/rhd20 `dparam /dev/rhd20`

    It's still that easy unless your new system is OSR507. 507 shipped with
    a broken masterboot which makes this more difficult. I believe that's
    been fixed along the way, so if it's 507, update the new system with
    OSR507MP5 before proceeding. Then do the above commands.
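
    (Running `dparam /dev/rhd20` with no other arguments prints the stored
    geometry values, which is why the second command works: the backquotes
    feed those values straight back into dparam's write form.)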

    After "stamping", reboot, then go back to `divvy` and see if the
    filesystem sizes & types make any more sense. Run `dtype /dev/part1`
    on each division's device node; do those filesystem types make sense?
    Finally, run `fsck -n -o full /dev/part1` on each of the divisions that
    looks like a mountable filesystem type. Do they still whine about wrong
    sizes?
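
    A one-liner loop saves some typing here, assuming the divisions are
    named /dev/part0 through /dev/part6:

    # for d in 0 1 2 3 4 5 6; do echo "== part$d =="; dtype /dev/part$d; done

    and the same pattern works for the fsck -n runs.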

    Ideally you should be doing all of this on a copy of the original data,
    mirrored onto another drive of the same (or larger) size. You can do
    the mirroring with something like Ghost or simply by:

    # dd if=/dev/hd20 of=/dev/hd30 bs=64k

    where hd20 & hd30 are the whole-disk device nodes for two drives. This
    command is dangerous: you need to be 150% certain that /dev/hd30 is
    really the trashable new empty drive!
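
    One cheap sanity check after the copy is to checksum the first 10MB or
    so of each drive and compare:

    # dd if=/dev/hd20 bs=64k count=160 2>/dev/null | sum
    # dd if=/dev/hd30 bs=64k count=160 2>/dev/null | sum

    Identical output suggests, though doesn't prove, a faithful copy.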

    BTW, nothing prevents you from using an IDE/SATA drive as the mirror.
    The OSR5 "wd" driver tends to choose different geometries than many of
    the SCSI HBA drivers, but this is irrelevant since you will be stamping
    the new drive with the original drive's geometry.

    What we refer to as "disk geometry" these days has nothing to do with
    the actual number of cylinders, heads & sectors/track of the drive, it's
    just an accounting trick to help the OS keep track of where things are
    on the drive. For a new drive, the only requirements for the chosen
    geometry are that it fit within constraints (<256 heads, <256 sectors,
    <65536 cylinders) -- and that it multiply out to the actual size of the
    drive.

    That last requirement is actually almost never met, since the drive
    is unlikely to have exactly [some number < 256] * [some number < 256]
    * [some number < 65536] total sectors. The geometry must multiply
    out to the drive's exact size _or less_, which is what almost always
    happens. For a drive that already has partitions and divisions on it,
    the requirement is: geometry Must Not Change even if you move the drive
    from one HBA to another, or move the logical contents of the drive to
    another drive using an image/mirroring technology like Ghost or `dd`.

    OSR5 tries to enforce the Must Not Change clause by stamping SCSI
    drives at install time. Three things go wrong with this: it doesn't
    stamp IDE drives; stamping of SCSI drives was added relatively late
    (506?); and the broken masterboot in 507 prevents the stamping from
    working. And possibly a 4th problem (I'm not sure about this one): I'm
    sure it tries to stamp the root drive at install time (subject to the
    other 3 constraints), but I'm not too sure about drives added after
    installation. Not sure if `mkdev hd` also tries to stamp new drives.

    Starting with OSR507MP1 or MP2 or so, a new utility `geotab` stamps
    drives with a new kind of geometry stamp which is supposed to be more
    resilient to movement between HBAs and/or imaging to different drive
    types. Again, I forget whether it includes a tweaked `mkdev hd` script
    that does this newfangled stamping to newly installed drives. (I should
    remember since I invented the new stamp, wrote the utility that
    implements it, and wrote the script that stamps all existing drives
    during the installation of the supplement that adds `geotab`. But it's
    been long enough that I can't say for sure, even though I know it
    _should_ have tweaked `mkdev hd`...)

    (Ok, I downloaded the newest wd BTLD and I still don't know the answer.
    It replaces one of the mkdev scripts (.scsi), but the new version
    doesn't mention either `dparam` or `geotab`. But it's also possible
    that the _kernel_ stamps a geotab onto new drives. If not, something
    probably should -- probably `mkdev hd`)

    Hmmm, I've written a tome, will bcc to Tony Lawrence...

    >Bela<


  7. Re: what to do about "cannot dump to dumpdev hd(1/41): space for only 0 pages"

    On Sep 18, 12:43 am, Bela Lubkin wrote:

    >
    > Hmmm, I've written a tome, will bcc to Tony Lawrence...
    >
    > >Bela<



    Thanks. I posted that at http://aplawrence.com/Bofcusm/2662.html but
    probably wouldn't have noticed it if you hadn't done the cc. My SCO
    work has dwindled away to almost nothing and I seldom visit this
    group.


    --
    Tony Lawrence
    http://aplawrence.com

