Thread: Help with savecore (HPUX 10.20)

  1. Help with savecore (HPUX 10.20)


    I have an HP 744i host adapter running HP-UX 10.20. It's an older system with
    an older SCSI hard drive, but I have a good 50+ MB free. However, I'm
    trying to troubleshoot a hardware driver bug, and savecore is not saving
    the crash dump on reboot.

    When I try to do the same thing manually, using the /sbin/savecore -d
    option (per HP's website instructions), I'm getting the following error:

    savecore: missing device file: 1f006000

    Every try at doing anything with savecore gives this error. Any clues why
    I'm getting this and how to get around it? Thank you all in advance for
    any help!

    Regards,
    Bill Smith

  2. Re: Help with savecore (HPUX 10.20)

    Hi William

    > savecore: missing device file: 1f006000

    This refers to the device file /dev/dsk/c0t6d0.
    Does this disk exist in your system?
    Did it exist before?
    If you are using LVM MirrorDisk/UX: this can happen if the system is
    mirrored, the mirror half on /dev/dsk/c0t6d0 is broken, and the system
    came up in low-quorum mode.
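
    If I read that hex string correctly (this decoding is an assumption -
    verify it on your box), it is a block device number: major 0x1f plus
    minor 0x006000, i.e. instance 0, target 6, LUN 0, which is how a
    /dev/dsk/c0t6d0 device file would normally be numbered. A quick sketch
    of how to check what is really there:

    # ll /dev/dsk/c0t6d0      # does the file exist, and with which major/minor?
    # lssf /dev/dsk/*         # which hardware path each disk device file points to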

    Florian

  3. Re: Help with savecore (HPUX 10.20)


    Florian,
    Thank you for taking an interest! I am not that familiar with file systems
    in Unix, but I recognize what you are pointing out. I am searching for
    "c0t6d0" and "1f006000" in /etc and all subdirectories without success. My
    single disk is /dev/dsk/c0t0d0, so I don't know where the c0t6d0 or
    1f006000 is coming from.

    Thank you again for any help you can provide!

    Regards,
    Bill Smith



    Florian Anwander wrote in
    news:3j4ts7Foh7kaU1@individual.net:

    > Hi William
    >
    >> savecore: missing device file: 1f006000

    > This refers to the device file /dev/dsk/c0t6d0.
    > Does this disk exist in your system?
    > Did it exist before?
    > If you are using LVM MirrorDisk/UX: this can happen if the system is
    > mirrored, the mirror half on /dev/dsk/c0t6d0 is broken, and the system
    > came up in low-quorum mode.
    >
    > Florian
    >



  4. Re: Help with savecore (HPUX 10.20)

    Hi William

    > Florian,
    > Thank you for taking an interest! I am not that familiar with file systems
    > in Unix, but I recognize what you are pointing out. I am searching for
    > "c0t6d0" and "1f006000" in /etc and all subdirectories without success. My
    > single disk is /dev/dsk/c0t0d0, so I don't know where the c0t6d0 or
    > 1f006000 is coming from.

    /etc does not contain disk information.

    first do the command
    # mount

    Does it show output like this:
    / on /dev/vg00/lvol1 defaults on Sat Apr 23 06:14:25 2005
    /var on /dev/vg00/lvol5 defaults on Sat Apr 23 06:14:34 2005
    /var/mail on /dev/vg00/lvol6 defaults on Sat Apr 23 06:14:34 2005
    /usr on /dev/vg00/lvol4 defaults on Sat Apr 23 06:14:34 2005
    /opt on /dev/vg00/lvol3 defaults on Sat Apr 23 06:14:34 2005
    ^^^^^^^^^^^^^^^ this is the important part


    or does it show something like this:
    / on /dev/dsk/c0t0d0 defaults on Fri Oct 29 07:26:05 2004
    ^^^^^^^^^^^^^^^ this is the important part


    If a /dev/dsk/... device is mounted, this is not LVM (Logical Volume Manager);
    if a /dev/vgXX/lvolY is mounted, we have LVM.


    If you have LVM, then possibly a mirror (two disks holding the same data
    in parallel, for redundancy) was used at some point.
    Try the following command as root:
    # lvlnboot -v

    It displays a kind of bootsector. The output for a mirrored system might
    look like this:

    # lvlnboot -v
    Boot Definitions for Volume Group /dev/vg00:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c0t0d0 (8/16/5.0.0) -- Boot Disk
    /dev/dsk/c0t6d0 (8/16/5.6.0) -- Boot Disk
    Root: lvol1 on: /dev/dsk/c0t0d0
    /dev/dsk/c0t6d0
    Swap: lvol2 on: /dev/dsk/c0t0d0
    /dev/dsk/c0t6d0
    Dump: lvol2 on: /dev/dsk/c0t6d0, 0


    This is a mirror using two disks, c0t0d0 and c0t6d0. The Root entry is
    the partition[1] from which the machine boots. The Swap entry is the
    partition where swap is located. The Dump entry shows the partition
    to which a dump will be written in the case of a crash. You can see that
    Root and Swap point to both halves of the mirror, while Dump points to
    only one half. savecore relies on this entry, and so does the dump
    mechanism itself.

    If c0t6d0 is broken now, savecore won't find the partition containing
    the dump (and perhaps the dump mechanism already failed to find it when
    writing the dump).

    You might have a look at the man page of savecore, especially the options
    -D and -O and also the section "Problems and Remedies".

    Florian


    Note:
    [1] it is not really a partition, but this does not matter here


  5. Re: Help with savecore (HPUX 10.20)


    Florian Anwander wrote in
    news:3j71mqFofpv0U1@individual.net:

    > Hi William
    >
    > first do the command
    > # mount
    >


    # mount
    / on /dev/dsk/c0t0d0 defaults on Sun Feb 19 18:06:52 2006

    * this is it... no LVM stuff. SAM doesn't report any LVM's either

    >
    > If c0t6d0 is broken now, savecore won't find the partition containing
    > the dump (and perhaps the dump mechanism already failed to find it when
    > writing the dump).
    >
    > You might have a look at the man page of savecore, especially the
    > options -D and -O and also the section "Problems and Remedies".
    >


    In the case of the crash I'm trying to analyze, here is the output:


    | System Panic:
    | B2352B HP-UX (B.10.20) #1: Sun Jun 9 08:03:38 PDT 1996
    | panic: (display==0xbf00, flags==0x0) m_copydata 3
    | ...
    | sync'ing disks (13 buffers to flush): 13 13 13 13 13 13 13 13 13 13 13
    | 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
    | 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
    | 13 13 13 13 (13 buffers to flush): 13
    | 13 buffers not flushed
    | 0 buffers still dirty
    | It was not possible for the kernel to find a process
    | that caused this crash.

    According to the savecore man page, the crash console messages should
    say where to find the crash dump (for the -D and -O options), but this
    doesn't seem to show it.

    Regards,
    Bill Smith

  6. Re: Help with savecore (HPUX 10.20)

    William Smith wrote:
    > Florian Anwander wrote in
    > news:3j71mqFofpv0U1@individual.net:
    >
    >
    >>Hi William
    >>
    >>first do the command
    >># mount
    >>

    >
    >
    > # mount
    > / on /dev/dsk/c0t0d0 defaults on Sun Feb 19 18:06:52 2006
    >
    > * this is it... no LVM stuff. SAM doesn't report any LVM's either
    >
    >
    >>If c0t6d0 is broken now, savecore won't find the partition containing
    >>the dump (and perhaps the dump mechanism already failed to find it when
    >>writing the dump).
    >>
    >>You might have a look at the man page of savecore, especially the
    >>options -D and -O and also the section "Problems and Remedies".
    >>

    >
    >
    > In the case of the crash I'm trying to analyze, here is the output:
    >
    >
    > | System Panic:
    > | B2352B HP-UX (B.10.20) #1: Sun Jun 9 08:03:38 PDT 1996
    > | panic: (display==0xbf00, flags==0x0) m_copydata 3
    > | ...
    > | sync'ing disks (13 buffers to flush): 13 13 13 13 13 13 13 13 13 13 13
    > | 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
    > | 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
    > | 13 13 13 13 (13 buffers to flush): 13
    > | 13 buffers not flushed
    > | 0 buffers still dirty
    > | It was not possible for the kernel to find a process
    > | that caused this crash.
    >
    > According to the savecore man page, the crash console messages should
    > say where to find the crash dump (for the -D and -O options), but this
    > doesn't seem to show it.
    >
    > Regards,
    > Bill Smith

    My .02. The system that you are running is not a correct HP-UX install.
    HP-UX 10.20 boot disks are supposed to be SCSI ID 6.
    c(ontroller)0t(arget)6d(isk)0. You are booting off of target 0 ( the
    lowest priority on the SCSI bus ) the controller is id 7, the highest,
    the boot disk is supposed to be next (6). I do not know the exact code,
    but I never tried to run a system disk off of anything besides ID 6, it
    didn't work right, at least with 10.x
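
    A hedged way to check what is actually configured: ioscan lists the
    hardware paths, SCSI targets and bound device files for every disk the
    system sees, so you can confirm whether anything ever lived at target 6:

    # ioscan -fnC disk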

  7. Re: Help with savecore (HPUX 10.20)

    In article , Alan D Johnson wrote:
    >> Florian Anwander wrote in


    >> # mount
    >> / on /dev/dsk/c0t0d0 defaults on Sun Feb 19 18:06:52 2006


    > My .02. The system that you are running is not a correct HP-UX install.
    > HP-UX 10.20 boot disks are supposed to be SCSI ID 6.
    > c(ontroller)0t(arget)6d(isk)0. You are booting off of target 0 ( the
    > lowest priority on the SCSI bus ) the controller is id 7, the highest,
    > the boot disk is supposed to be next (6). I do not know the exact code,
    > but I never tried to run a system disk off of anything besides ID 6, it
    > didn't work right, at least with 10.x


    AFAICT there never was anything but a _recommendation_ that the boot
    disk be the one with next highest priority after the controller itself.
    I'm fairly certain that I've seen a K-class configured so that the boot
    disk was something like 4, and 6 was the disk where the almost-realtime
    application lived.

    And right now I'm looking at one where there are _no_ disks on
    single-digit targets (well, except the CDROM on a separate bus)
    and it boots just fine from target 12. Yes, 10.20. It'd better be
    (or have been, by now) supported after all the service
    contracts it's been on...


    Of course, by now we've seen things like the L-class where the built-in
    disk slots have targets 0 and 2 only, and it's SCA so no jumpers either.
    Then again, _that_ thing doesn't boot 10.20 anyway.

    And it might have been different for workstations.


    --
    Mikko Nahkola
    #include
    #Not speaking for my employer. No warranty. YMMV.

  8. Re: Help with savecore (HPUX 10.20)

    Mikko Nahkola wrote:
    > In article , Alan D Johnson wrote:
    >
    >>>Florian Anwander wrote in

    >
    >
    >>># mount
    >>>/ on /dev/dsk/c0t0d0 defaults on Sun Feb 19 18:06:52 2006

    >
    >
    >>My .02. The system that you are running is not a correct HP-UX install.
    >>HP-UX 10.20 boot disks are supposed to be SCSI ID 6.
    >>c(ontroller)0t(arget)6d(isk)0. You are booting off of target 0 ( the
    >>lowest priority on the SCSI bus ) the controller is id 7, the highest,
    >>the boot disk is supposed to be next (6). I do not know the exact code,
    >>but I never tried to run a system disk off of anything besides ID 6, it
    >>didn't work right, at least with 10.x

    >
    >
    > AFAICT there never was anything but a _recommendation_ that the boot
    > disk be the one with next highest priority after the controller itself.
    > I'm fairly certain that I've seen a K-class configured so that the boot
    > disk was something like 4, and 6 was the disk where the almost-realtime
    > application lived.
    >
    > And right now I'm looking at one where there are _no_ disks on
    > single-digit targets (well, except the CDROM on a separate bus)
    > and it boots just fine from target 12. Yes, 10.20. It'd better be
    > (or have been, by now) supported after all the service
    > contracts it's been on...
    >
    >
    > Of course, by now we've seen things like the L-class where the built-in
    > disk slots have targets 0 and 2 only, and it's SCA so no jumpers either.
    > Then again, _that_ thing doesn't boot 10.20 anyway.
    >
    > And it might have been different for workstations.
    >
    >

    As you say, it is recommended; I don't recall for sure either. Won't
    savecore normally try to use the swap area if a dump directory isn't
    defined? Is any location defined in the /etc/rc.config.d/savecore file?
    Maybe it's just using the "default" install directory.

  9. Re: Help with savecore (HPUX 10.20)

    > When I try to do the same thing manually, using the /sbin/savecore -d
    >option (per HP's website instructions), I'm getting the following error:


    > savecore: missing device file: 1f006000


    OK, there are several possible causes for the problem. I'll go from the
    simplest to the more complex.
    First off, try
    lvlnboot -v
    With my machine this gives:
    [ROOT]:lasagne:> lvlnboot -v
    Boot Definitions for Volume Group /dev/vg00:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c3t6d0 (10/0/15/1.6.0) -- Boot Disk
    Boot: lvol1 on: /dev/dsk/c3t6d0
    Root: lvol3 on: /dev/dsk/c3t6d0
    Swap: lvol2 on: /dev/dsk/c3t6d0
    Dump: lvol2 on: /dev/dsk/c3t6d0, 0

    Are the values on your machine correct? Especially the 'Dump' part?

    If yes, then there is a second possible problem:
    When the machine boots and needs the dump path, it doesn't look in the
    RBDA (reserved boot data area), which is where lvlnboot gets its wisdom
    from, but in the NVRAM. (I don't know where savecore on the command line
    looks - and I don't want to try ;-)
    Whenever you change the configuration, you need to update the NVRAM
    (otherwise you get errors like these in your boot log):
    ---- the error message here -----
    savecore: open failed /dev/dsk/c0t10d0: No such device or address
    savecore: could not open dump
    EXIT CODE: 1
    "/sbin/rc1.d/S440savecore start" FAILED
    ----------------------------------
    The method is as easy as it is brutal... TOC the machine.
    At the command line, "sync" the machine twice (to save your data).
    Immediately after that, press the TOC button on the machine. This causes a
    panic and a reboot, and at the same time the NVRAM is updated from the
    RBDA.
    (Remember to clean up the dump in /var/adm/crash afterwards.)
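
    In command form, the whole procedure is just (a sketch - where the TOC
    button or keyswitch sits depends on the machine model):

    # sync
    # sync
    (now press the TOC button; the machine panics, reboots, and the NVRAM is
    refreshed from the RBDA - then remove the unwanted dump from /var/adm/crash)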

    Problem 2:
    The result from lvlnboot -v is incorrect.

    NOTE:
    The following section is not for the faint of heart - if you fail, you
    will make the machine unbootable!
    First, get together the information that lvlnboot should show. (Mine are
    just examples, working for _my_ machine.)

    1.
    Remove all the definitions for the VG from the RBDA.

    [iridium_ROOT]:>/sbin/lvrmboot -v -r /dev/vg01
    Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

    The VG you need is the one that "lvlnboot -v" displayed ("Boot Definitions for Volume Group ...").

    Now everything is gone:
    [iridium_ROOT]:>lvlnboot -v
    lvlnboot: The Boot Data Area is empty.
    Boot Definitions for Volume Group /dev/vg01:
    The Boot Data Area is empty.

    Now create all entries anew, check them, and write them to the RBDA.
    (Without that, the machine won't boot - at least not without ugly tricks.)

    E.g.:
    [iridium_ROOT]:>lvlnboot -v -r /dev/vg01/lvol3
    lvlnboot: WARNING !! Creating a separate root volume, Use the -b option to the lvlnboot command to create a separate boot volume
    Boot Definitions for Volume Group /dev/vg01:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c2t15d0 (2/0/7.15.0) -- Boot Disk
    No Boot Logical Volume configured
    Root: lvol3 on: /dev/dsk/c2t15d0
    No Swap Logical Volume configured
    No Dump Logical Volume configured

    Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
    [iridium_ROOT]:>lvlnboot -v -b /dev/vg01/lvol1
    Boot Definitions for Volume Group /dev/vg01:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c2t15d0 (2/0/7.15.0) -- Boot Disk
    Boot: lvol1 on: /dev/dsk/c2t15d0
    Root: lvol3 on: /dev/dsk/c2t15d0
    No Swap Logical Volume configured
    No Dump Logical Volume configured

    Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
    [iridium_ROOT]:>lvlnboot -v -s /dev/vg01/lvol2
    Boot Definitions for Volume Group /dev/vg01:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c2t15d0 (2/0/7.15.0) -- Boot Disk
    Boot: lvol1 on: /dev/dsk/c2t15d0
    Root: lvol3 on: /dev/dsk/c2t15d0
    Swap: lvol2 on: /dev/dsk/c2t15d0
    No Dump Logical Volume configured

    Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
    [iridium_ROOT]:>lvlnboot -v -d /dev/vg01/lvol2
    Boot Definitions for Volume Group /dev/vg01:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c2t15d0 (2/0/7.15.0) -- Boot Disk
    Boot: lvol1 on: /dev/dsk/c2t15d0
    Root: lvol3 on: /dev/dsk/c2t15d0
    Swap: lvol2 on: /dev/dsk/c2t15d0
    Dump: lvol2 on: /dev/dsk/c2t15d0, 0

    Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

    Now check it:

    [iridium_ROOT]:>lvlnboot -v
    Boot Definitions for Volume Group /dev/vg01:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c2t15d0 (2/0/7.15.0) -- Boot Disk
    Boot: lvol1 on: /dev/dsk/c2t15d0
    Root: lvol3 on: /dev/dsk/c2t15d0
    Swap: lvol2 on: /dev/dsk/c2t15d0
    Dump: lvol2 on: /dev/dsk/c2t15d0, 0

    If it is all right, write it to the RBDA:
    [iridium_ROOT]:>lvlnboot -v -R
    Boot Definitions for Volume Group /dev/vg01:
    Physical Volumes belonging in Root Volume Group:
    /dev/dsk/c2t15d0 (2/0/7.15.0) -- Boot Disk
    Boot: lvol1 on: /dev/dsk/c2t15d0
    Root: lvol3 on: /dev/dsk/c2t15d0
    Swap: lvol2 on: /dev/dsk/c2t15d0
    Dump: lvol2 on: /dev/dsk/c2t15d0, 0

    Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

    If you changed the dump path along the way, you need to apply the first trick (the TOC) as well.
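
    To summarize the whole sequence in one place (a sketch reusing the
    /dev/vg01 and lvol numbers from my examples above - substitute your own
    volume group and logical volumes):

    # lvrmboot -v -r /dev/vg01          # clear the boot definitions from the RBDA
    # lvlnboot -v -r /dev/vg01/lvol3    # root
    # lvlnboot -v -b /dev/vg01/lvol1    # boot
    # lvlnboot -v -s /dev/vg01/lvol2    # swap
    # lvlnboot -v -d /dev/vg01/lvol2    # dump
    # lvlnboot -v                       # check the result
    # lvlnboot -v -R                    # write it back to the RBDA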

    "Alan D Johnson" schrieb im Newsbeitrag news:rYFze.36664$IX4.321@twister.nyc.rr.com...
    > My .02. The system that you are running is not a correct HP-UX install.
    > HP-UX 10.20 boot disks are supposed to be SCSI ID 6.
    > c(ontroller)0t(arget)6d(isk)0. You are booting off of target 0 ( the
    > lowest priority on the SCSI bus ) the controller is id 7, the highest,
    > the boot disk is supposed to be next (6). I do not know the exact code,
    > but I never tried to run a system disk off of anything besides ID 6, it
    > didn't work right, at least with 10.x


    That's definitely not a problem - if you configure the system correctly for an ID != 6.
    I've done this on several machines and didn't see any problems.
    (But see above.)

    HTH - and good luck

    Martin


  10. Re: Help with savecore (HPUX 10.20)

    Hi William

    > # mount
    > / on /dev/dsk/c0t0d0 defaults on Sun Feb 19 18:06:52 2006
    >
    > * this is it... no LVM stuff. SAM doesn't report any LVM's either
    > [...]
    > In the case of the crash I'm trying to analyze, here is the output:
    >
    >
    > | System Panic:
    > | B2352B HP-UX (B.10.20) #1: Sun Jun 9 08:03:38 PDT 1996
    > | panic: (display==0xbf00, flags==0x0) m_copydata 3
    > | ...
    > | sync'ing disks (13 buffers to flush): 13 13 13 13 13 13 13 13 13 13 13
    > | 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
    > | 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
    > | 13 13 13 13 (13 buffers to flush): 13
    > | 13 buffers not flushed
    > | 0 buffers still dirty
    > | It was not possible for the kernel to find a process
    > | that caused this crash.
    >
    > According to the savecore man page, the crash console messages should
    > say where to find the crash dump (for the -D and -O options), but this
    > doesn't seem to show it.

    The following is just a guess, but worth a try:

    savecore -D 1f006000


    Florian

  11. Re: Help with savecore (HPUX 10.20)


    "Florian Anwander" schrieb im Newsbeitrag news:3jhhi9FpqcljU1@individual.net...
    > Hi William
    >
    > > # mount
    > > / on /dev/dsk/c0t0d0 defaults on Sun Feb 19 18:06:52 2006


    "Martin Jost" schrieb im Newsbeitrag news:datqoe$fu0$1@news.mch.sbs.de...
    >Ok, there are several possible causes for the problem. Here I go from the simplest to the more complex.
    >First of, try
    >lvlnboot -v


    OK, forget about my lvlnboot part (for your purposes).
    I missed the fact that you are using a whole-disk layout - sorry.

    Florian corrected me on this in an email (thanks!)

    Martin


  12. Re: Help with savecore (HPUX 10.20)

    Martin Jost wrote:

    > "Florian Anwander" schrieb im Newsbeitrag news:3jhhi9FpqcljU1@individual.net...
    >
    >>Hi William
    >>
    >>
    >>># mount
    >>>/ on /dev/dsk/c0t0d0 defaults on Sun Feb 19 18:06:52 2006

    >
    >
    > "Martin Jost" schrieb im Newsbeitrag news:datqoe$fu0$1@news.mch.sbs.de...
    >
    >>Ok, there are several possible causes for the problem. Here I go from the simplest to the more complex.
    >>First of, try
    >>lvlnboot -v

    >
    >
    > OK, forget about my lvlnboot part (for your purposes).
    > I missed the fact that you are using a whole-disk layout - sorry.
    >
    > Florian corrected me on this in an email (thanks!)
    >
    > Martin
    >

    At least William will realize there is a lot more to it than he realized
    .... You are daring, aren't you? )

  13. Re: Help with savecore (HPUX 10.20)


    Alan, Florian, Martin, and Mikko;
    I appreciate the pointers! The 'lvlnboot' didn't work, of course, and
    the 'savecore -D 1f006000' said no such file. However, the tip about
    the SCSI ID might be useful. Maybe there's some setting I can't find on
    the system that is looking for a disk on SCSI ID 6. I can easily test
    this out tomorrow by changing the SCSI ID on the disk box.

    My most pressing issue now is an empty /var/adm/sw/products/INDEX file!
    If ya'll have any recommendations for restoring it, I'm all ears!

    Many thanks!

    Regards,
    Bill Smith



    Alan D Johnson wrote in
    news:Fb_Ae.1984$Y54.1334@twister.nyc.rr.com:

    > Martin Jost wrote:
    >
    >> "Florian Anwander" schrieb
    >> im Newsbeitrag news:3jhhi9FpqcljU1@individual.net...
    >>
    >>
    >> "Martin Jost" schrieb im Newsbeitrag
    >> news:datqoe$fu0$1@news.mch.sbs.de...
    >>
    >>>lvlnboot -v

    >>
    >>
    >> Ok, forget about my lvlnboot-part. (for your purpose)
    >> I missed the fact, that you are using a whole disc layout - sorry.
    >>
    >> Florian corrected me on this in an email (thanks !)
    >>
    >> Martin
    >>

    > At least William will realize there is a lot more to it than he
    > realized
    > .... You are daring, aren't you? )



  14. Re: Help with savecore (HPUX 10.20)


    "William Smith" schrieb im Newsbeitrag news:Xns9693BEF332DAAbsmithiphasecom@199.171.54.21 3...
    >
    > My most pressing issue now is an empty /var/adm/sw/products/INDEX file!


    This file is called IPD (Installed Products Database) - helps to know if you try to google for it...

    > If ya'll have any recommendations for restoring it, I'm all ears!


    I found this in ITRC forums:
    ================= snip, snip, =====================

    Dave Unverhau Jun 3, 2004 17:51:26 GMT 10 pts

    --------------------------------------------------------------------------------
    Dan,

    Here's a process for building a new IPD:

    # cd /var/adm/sw/products
    # mv INDEX INDEX.bad
    # cd /tmp
    # vi void.psf
    product
    tag void
    fileset
    tag void
    :wq!
    # swpackage -s /tmp/void.psf
    # swinstall void
    # swremove void
    # rm void.psf
    # swremove -d void

    You should be back in business. Let us know how it goes!

    (I got this from a Response Center Engineer a couple of years ago and have seen it posted on the ITRC a few times since then...it's handy to keep around -- I've needed it!)

    Best Regards,

    Dave

    ========================== snip, snip ========================

    The trick seems to be that swinstall recreates the IPD.
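
    If you would rather skip the interactive vi step, the same void.psf can
    be written with a here-document (a sketch; the PSF content is exactly the
    one from the recipe above):

    # cat > /tmp/void.psf <<'EOF'
    product
    tag void
    fileset
    tag void
    EOF

    and then carry on with the swpackage / swinstall / swremove steps exactly
    as listed.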

    HTH (this time)

    Martin

    > > At least William will realize there is a lot more to it than he
    > > realized
    > > .... You are daring, aren't you? )


    Yes - after getting my fingers burnt more than once.
    E.g. it took me half an afternoon to set up my first HP-UX 10.20 box and
    just one hour to botch the LVM completely afterwards... Luckily, this was
    just a sandbox.


  15. Re: Help with savecore (HPUX 10.20)


    Martin,
    Thanks for the info... I found that exact forum thread not long after I
    posted my message and thought I had posted it here, too. That did the
    trick, and I'm back in business.

    Regards,
    Bill Smith


    "Martin Jost" wrote in
    news:db872f$27u$1@news.mch.sbs.de:

    >
    > This file is called IPD (Installed Products Database) - helps to know
    > if you try to google for it...
    >
    >
    > I found this in ITRC forums:
    > ================= snip, snip, ====================
    > Dave Unverhau Jun 3, 2004 17:51:26 GMT 10 pts
    >
    > --------------------------------------------------------------------------
    > Dan,
    >
    > Here's a process for building a new IPD:
    >
    > # cd /var/adm/sw/products
    > # mv INDEX INDEX.bad
    > # cd /tmp
    > # vi void.psf
    > product
    > tag void
    > fileset
    > tag void
    >:wq!
    > # swpackage -s /tmp/void.psf
    > # swinstall void
    > # swremove void
    > # rm void.psf
    > # swremove -d void
    >
    > You should be back in business. Let us know how it goes!
    >
    > (I got this from a Response Center Engineer a couple of years ago and
    > have seen it posted on the ITRC a few times since then...it's handy to
    > keep around -- I've needed it!)
    >
    > Best Regards,
    >
    > Dave
    >

