lost my volume group - Veritas Volume Manager


Thread: lost my volume group

  1. lost my volume group


    I had several volumes in a volume group. I stopped one of the volumes,
    moved it to another host, and did an import. The volume showed up on the
    new host just fine, but so did all the other volumes that I don't need on
    the new host, so I deleted them. Well, a couple of days went by and I had to
    put that volume back on the original host. I moved the disks back and the
    volume reappeared on the original host; however, I lost all my other volumes.
    Is there any way I can get those back? The disks are fine, but when I do
    a vxdisk list c1t4d0s0 (for example) it still knows it's in that volume group,
    but it has nothing in the name field.

    Any help would be greatly appreciated!

  2. Re: lost my volume group

    OK, let me explain something.

    The volume information is NOT stored on the host, but on the disks.

    This means that if you delete (remove) volumes on a machine, it actually
    removes all traces from the disks (the data might still be there, but
    the Volume Manager configuration that knows how to access the data is
    gone). So, unless you have an old vxprint or saved your configuration,
    sorry, the old volumes are gone.

    On the new machine you should have just NOT started the volumes you did
    not need. That would have prevented access without permanently removing
    them from the configuration.

    OK, so now, how do we get it back?

    Hopefully you have an old vxprint output, or, if this is VM 4.0 or later,
    the disk group info will still be in the /etc/vx/cbr directory. I will try
    and help you get it back if you can post either one here.
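
    For VxVM 4.0 and later, the configuration backup daemon keeps copies of
    each disk group's configuration under /etc/vx/cbr, and vxconfigrestore can
    rebuild a lost configuration from them. A minimal sketch (the disk group
    name icebox is from this thread; check the exact paths and flags against
    your VxVM release):

    ```shell
    # Look at the saved configuration copies kept by the backup daemon
    ls /etc/vx/cbr/bk

    # Precommit: stage the restore so it can be inspected first
    vxconfigrestore -p icebox

    # After reviewing the staged configuration, either commit it ...
    vxconfigrestore -c icebox

    # ... or abandon the staged restore
    vxconfigrestore -d icebox
    ```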



  3. Re: lost my volume group


    I was able to get a copy of vxprint by importing the volume group on another
    host with some disks I removed last week, so with this information I'm hoping
    I can rebuild these subdisks, plex, and volume manually. Thanks so much
    for the help! I'll just paste in the information for the volume I'm interested
    in getting back.

    ~mike~

    Disk group: icebox

    Group: icebox
    info: dgid=1035323826.1295.hound
    version: 90
    activation: read-write
    detach-policy: global
    copies: nconfig=default nlog=default
    minors: >= 48000

    Disk: v5_01
    info: diskid=1077056775.2684.hound
    assoc: nodevice (last known device: c16t4d0s2)

    Disk: v5_02
    info: diskid=1077056791.2692.hound
    assoc: nodevice (last known device: c3t4d0s2)

    Disk: v5_03
    info: diskid=1077056794.2694.hound
    assoc: nodevice (last known device: c4t4d0s2)

    Disk: v5_04
    info: diskid=1077056797.2696.hound
    assoc: nodevice (last known device: c19t4d0s2)

    Disk: v5_05
    info: diskid=1077056800.2698.hound
    assoc: nodevice (last known device: c18t4d0s2)

    Disk: v5_06
    info: diskid=1077056803.2700.hound
    assoc: nodevice (last known device: c19t8d0s2)

    Disk: v5_07
    info: diskid=1077056806.2702.hound
    assoc: nodevice (last known device: c18t8d0s2)

    Disk: v5_08
    info: diskid=1077056810.2704.hound
    assoc: nodevice (last known device: c16t8d0s2)

    Disk: v5_09
    info: diskid=1077056781.2686.hound
    assoc: nodevice (last known device: c19t9d0s2)

    Disk: v5_10
    info: diskid=1077056768.2682.hound
    assoc: nodevice (last known device: c18t9d0s2)

    Disk: v5_11
    info: diskid=1077056784.2688.hound
    assoc: nodevice (last known device: c16t9d0s2)

    Disk: v5_12
    info: diskid=1077056787.2690.hound
    assoc: nodevice (last known device: c19t10d0s2)

    Disk: v5_13
    info: diskid=1121802238.3449.hound
    assoc: nodevice (last known device: c18t5d0s2)

    Disk: v5_14
    info: diskid=1121802234.3447.hound
    assoc: nodevice (last known device: c18t10d0s2)

    Disk: v5_15
    info: diskid=1126552615.3550.hound
    assoc: nodevice (last known device: c16t5d0s2)

    Disk: v5_16
    info: diskid=1126552619.3552.hound
    assoc: nodevice (last known device: c18t13d0s2)

    Disk: v5_17
    info: diskid=1126552623.3554.hound
    assoc: nodevice (last known device: c3t12d0s2)



    Volume: vol05
    info: len=4587485696
    type: usetype=raid5
    state: state=ACTIVE kernel=DISABLED cdsrecovery=0/0 (clean)
    assoc: plexes=vol05-01
    policies: read=RAID exceptions=NO_OP
    flags: closed degraded unusable writeback
    logging: type=RAID5 loglen=0 serial=0/0 (disabled)
    apprecov: seqno=0/0
    recovery: mode=UNKNOWN
    recov_id=0
    device: minor=48006 bdev=209/48006 cdev=209/48006 path=/dev/vx/dsk/icebox/vol05
    perms: user=root group=root mode=0600

    Plex: vol05-01
    info: len=0 (sparse)
    type: layout=RAID columns=17 width=32
    state: state=ACTIVE kernel=DISABLED io=read-write
    assoc: vol=vol05 sd=v5_01-04,v5_01-01,v5_02-04,v5_02-01,v5_03-04,...
    flags:

    Subdisk: v5_01-04
    info: disk=v5_01 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=0 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_01-01
    info: disk=v5_01 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=0 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_02-04
    info: disk=v5_02 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=1 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_02-01
    info: disk=v5_02 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=1 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_03-04
    info: disk=v5_03 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=2 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_03-01
    info: disk=v5_03 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=2 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_04-04
    info: disk=v5_04 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=3 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_04-01
    info: disk=v5_04 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=3 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_05-04
    info: disk=v5_05 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=4 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_05-01
    info: disk=v5_05 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=4 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_06-04
    info: disk=v5_06 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=5 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_06-01
    info: disk=v5_06 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=5 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_07-04
    info: disk=v5_07 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=6 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_07-01
    info: disk=v5_07 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=6 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_08-04
    info: disk=v5_08 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=7 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_08-01
    info: disk=v5_08 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=7 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_09-04
    info: disk=v5_09 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=8 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_09-01
    info: disk=v5_09 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=8 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_10-04
    info: disk=v5_10 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=9 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_10-01
    info: disk=v5_10 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=9 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_11-04
    info: disk=v5_11 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=10 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_11-01
    info: disk=v5_11 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=10 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_12-04
    info: disk=v5_12 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=11 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_12-01
    info: disk=v5_12 offset=232968960 len=53752320
    assoc: vol=vol05 plex=vol05-01 (column=11 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_13-05
    info: disk=v5_13 offset=1922820 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=12 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_13-01
    info: disk=v5_13 offset=234895020 len=52130760
    assoc: vol=vol05 plex=vol05-01 (column=12 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_13-02
    info: disk=v5_13 offset=0 len=1620600
    assoc: vol=vol05 plex=vol05-01 (column=12 offset=285098728)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_14-05
    info: disk=v5_14 offset=1922820 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=13 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_14-01
    info: disk=v5_14 offset=234895020 len=52130760
    assoc: vol=vol05 plex=vol05-01 (column=13 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_14-02
    info: disk=v5_14 offset=0 len=1620600
    assoc: vol=vol05 plex=vol05-01 (column=13 offset=285098728)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_15-02
    info: disk=v5_15 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=14 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_15-01
    info: disk=v5_15 offset=232972584 len=53749960
    assoc: vol=vol05 plex=vol05-01 (column=14 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_16-02
    info: disk=v5_16 offset=0 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=15 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_16-01
    info: disk=v5_16 offset=232972584 len=53749960
    assoc: vol=vol05 plex=vol05-01 (column=15 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_17-03
    info: disk=v5_17 offset=1585640 len=232967968
    assoc: vol=vol05 plex=vol05-01 (column=16 offset=0)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_17-01
    info: disk=v5_17 offset=234558224 len=52164320
    assoc: vol=vol05 plex=vol05-01 (column=16 offset=232967968)
    flags: disabled iofail
    device: device=(no device)


    Subdisk: v5_17-02
    info: disk=v5_17 offset=0 len=1585640
    assoc: vol=vol05 plex=vol05-01 (column=16 offset=285132288)
    flags: disabled iofail
    device: device=(no device)


    ---------------------------------------------------------------------------------

    Disk group: icebox

    DG NAME NCONFIG NLOG MINORS GROUP-ID
    DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
    RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
    RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
    V NAME RVG KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
    PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
    SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
    SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
    DC NAME PARENTVOL LOGVOL
    SP NAME SNAPVOL DCO

    dg icebox default default 48000 1035323826.1295.hound

    dm v5_01 - - - - NODEVICE
    dm v5_02 - - - - NODEVICE
    dm v5_03 - - - - NODEVICE
    dm v5_04 - - - - NODEVICE
    dm v5_05 - - - - NODEVICE
    dm v5_06 - - - - NODEVICE
    dm v5_07 - - - - NODEVICE
    dm v5_08 - - - - NODEVICE
    dm v5_09 - - - - NODEVICE
    dm v5_10 - - - - NODEVICE
    dm v5_11 - - - - NODEVICE
    dm v5_12 - - - - NODEVICE
    dm v5_13 - - - - NODEVICE
    dm v5_14 - - - - NODEVICE
    dm v5_15 - - - - NODEVICE
    dm v5_16 - - - - NODEVICE
    dm v5_17 - - - - NODEVICE


    v vol05 - DISABLED ACTIVE 4587485696 RAID - raid5
    pl vol05-01 vol05 DISABLED ACTIVE 0 RAID 17/32 RW
    sd v5_01-04 vol05-01 v5_01 0 232967968 0/0 - NDEV
    sd v5_01-01 vol05-01 v5_01 232968960 53752320 0/232967968 - NDEV
    sd v5_02-04 vol05-01 v5_02 0 232967968 1/0 - NDEV
    sd v5_02-01 vol05-01 v5_02 232968960 53752320 1/232967968 - NDEV
    sd v5_03-04 vol05-01 v5_03 0 232967968 2/0 - NDEV
    sd v5_03-01 vol05-01 v5_03 232968960 53752320 2/232967968 - NDEV
    sd v5_04-04 vol05-01 v5_04 0 232967968 3/0 - NDEV
    sd v5_04-01 vol05-01 v5_04 232968960 53752320 3/232967968 - NDEV
    sd v5_05-04 vol05-01 v5_05 0 232967968 4/0 - NDEV
    sd v5_05-01 vol05-01 v5_05 232968960 53752320 4/232967968 - NDEV
    sd v5_06-04 vol05-01 v5_06 0 232967968 5/0 - NDEV
    sd v5_06-01 vol05-01 v5_06 232968960 53752320 5/232967968 - NDEV
    sd v5_07-04 vol05-01 v5_07 0 232967968 6/0 - NDEV
    sd v5_07-01 vol05-01 v5_07 232968960 53752320 6/232967968 - NDEV
    sd v5_08-04 vol05-01 v5_08 0 232967968 7/0 - NDEV
    sd v5_08-01 vol05-01 v5_08 232968960 53752320 7/232967968 - NDEV
    sd v5_09-04 vol05-01 v5_09 0 232967968 8/0 - NDEV
    sd v5_09-01 vol05-01 v5_09 232968960 53752320 8/232967968 - NDEV
    sd v5_10-04 vol05-01 v5_10 0 232967968 9/0 - NDEV
    sd v5_10-01 vol05-01 v5_10 232968960 53752320 9/232967968 - NDEV
    sd v5_11-04 vol05-01 v5_11 0 232967968 10/0 - NDEV
    sd v5_11-01 vol05-01 v5_11 232968960 53752320 10/232967968 - NDEV
    sd v5_12-04 vol05-01 v5_12 0 232967968 11/0 - NDEV
    sd v5_12-01 vol05-01 v5_12 232968960 53752320 11/232967968 - NDEV
    sd v5_13-05 vol05-01 v5_13 1922820 232967968 12/0 - NDEV
    sd v5_13-01 vol05-01 v5_13 234895020 52130760 12/232967968 - NDEV
    sd v5_13-02 vol05-01 v5_13 0 1620600 12/285098728 - NDEV
    sd v5_14-05 vol05-01 v5_14 1922820 232967968 13/0 - NDEV
    sd v5_14-01 vol05-01 v5_14 234895020 52130760 13/232967968 - NDEV
    sd v5_14-02 vol05-01 v5_14 0 1620600 13/285098728 - NDEV
    sd v5_15-02 vol05-01 v5_15 0 232967968 14/0 - NDEV
    sd v5_15-01 vol05-01 v5_15 232972584 53749960 14/232967968 - NDEV
    sd v5_16-02 vol05-01 v5_16 0 232967968 15/0 - NDEV
    sd v5_16-01 vol05-01 v5_16 232972584 53749960 15/232967968 - NDEV
    sd v5_17-03 vol05-01 v5_17 1585640 232967968 16/0 - NDEV
    sd v5_17-01 vol05-01 v5_17 234558224 52164320 16/232967968 - NDEV
    sd v5_17-02 vol05-01 v5_17 0 1585640 16/285132288 - NDEV







  4. Re: lost my volume group


    Hey guys/gals, I was able to fix this with the following procedure:

    1. Re-added all the disks to VxVM control (reinitialized them) using the
    same names they had before.

    2. Ran the following command to convert my saved vxprint output to a
    different format (the name of my lost volume is vol05):

    cat /tmp/vxprintoutput.xtx | vxprint -D - -mpvsqQr vol05 > /tmp/vol05

    3. Ran the following command to re-make the volume from the old
    configuration file:

    vxmake -g icebox -d /tmp/vol05

    4. Started the volume:

    vxvol -g icebox start vol05

    5. Ran fsck on the volume (it was still in the clean state).

    6. Mounted the volume. Here is my vxprint -ht now:

    v vol05 - ENABLED SYNC 4587485696 RAID - raid5
    pl vol05-01 vol05 ENABLED ACTIVE 4587524608 RAID 17/32 RW
    sd v5_01-04 vol05-01 v5_01 0 232967968 0/0 c12t0d0 ENA
    sd v5_01-01 vol05-01 v5_01 232968960 53752320 0/232967968 c12t0d0 ENA
    sd v5_02-04 vol05-01 v5_02 0 232967968 1/0 c12t1d0 ENA
    sd v5_02-01 vol05-01 v5_02 232968960 53752320 1/232967968 c12t1d0 ENA
    sd v5_03-04 vol05-01 v5_03 0 232967968 2/0 c12t2d0 ENA
    sd v5_03-01 vol05-01 v5_03 232968960 53752320 2/232967968 c12t2d0 ENA
    sd v5_04-04 vol05-01 v5_04 0 232967968 3/0 c12t3d0 ENA
    sd v5_04-01 vol05-01 v5_04 232968960 53752320 3/232967968 c12t3d0 ENA
    sd v5_05-04 vol05-01 v5_05 0 232967968 4/0 c13t0d0 ENA
    sd v5_05-01 vol05-01 v5_05 232968960 53752320 4/232967968 c13t0d0 ENA
    sd v5_06-04 vol05-01 v5_06 0 232967968 5/0 c13t1d0 ENA
    sd v5_06-01 vol05-01 v5_06 232968960 53752320 5/232967968 c13t1d0 ENA
    sd v5_07-04 vol05-01 v5_07 0 232967968 6/0 c13t2d0 ENA
    sd v5_07-01 vol05-01 v5_07 232968960 53752320 6/232967968 c13t2d0 ENA
    sd v5_08-04 vol05-01 v5_08 0 232967968 7/0 c13t3d0 ENA
    sd v5_08-01 vol05-01 v5_08 232968960 53752320 7/232967968 c13t3d0 ENA
    sd v5_09-04 vol05-01 v5_09 0 232967968 8/0 c14t0d0 ENA
    sd v5_09-01 vol05-01 v5_09 232968960 53752320 8/232967968 c14t0d0 ENA
    sd v5_10-04 vol05-01 v5_10 0 232967968 9/0 c14t1d0 ENA
    sd v5_10-01 vol05-01 v5_10 232968960 53752320 9/232967968 c14t1d0 ENA
    sd v5_11-04 vol05-01 v5_11 0 232967968 10/0 c14t2d0 ENA
    sd v5_11-01 vol05-01 v5_11 232968960 53752320 10/232967968 c14t2d0 ENA
    sd v5_12-04 vol05-01 v5_12 0 232967968 11/0 c14t3d0 ENA
    sd v5_12-01 vol05-01 v5_12 232968960 53752320 11/232967968 c14t3d0 ENA
    sd v5_13-05 vol05-01 v5_13 1922820 232967968 12/0 c15t1d0 ENA
    sd v5_13-01 vol05-01 v5_13 234895020 52130760 12/232967968 c15t1d0 ENA
    sd v5_13-02 vol05-01 v5_13 0 1620600 12/285098728 c15t1d0 ENA
    sd v5_14-05 vol05-01 v5_14 1922820 232967968 13/0 c15t0d0 ENA
    sd v5_14-01 vol05-01 v5_14 234895020 52130760 13/232967968 c15t0d0 ENA
    sd v5_14-02 vol05-01 v5_14 0 1620600 13/285098728 c15t0d0 ENA
    sd v5_15-02 vol05-01 v5_15 0 232967968 14/0 c15t2d0 ENA
    sd v5_15-01 vol05-01 v5_15 232972584 53749960 14/232967968 c15t2d0 ENA
    sd v5_16-02 vol05-01 v5_16 0 232967968 15/0 c15t3d0 ENA
    sd v5_16-01 vol05-01 v5_16 232972584 53749960 15/232967968 c15t3d0 ENA
    sd v5_17-03 vol05-01 v5_17 1585640 232967968 16/0 c0t3d0 ENA
    sd v5_17-01 vol05-01 v5_17 234558224 52164320 16/232967968 c0t3d0 ENA
    sd v5_17-02 vol05-01 v5_17 0 1585640 16/285132288 c0t3d0 ENA


    Thanks!
    ~mike
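
    The whole procedure can be sketched as one shell session (the device and
    disk media names are from this thread; the mount point /icebox and the
    vxdisksetup step for each remaining disk are assumptions, and exact
    syntax may vary by platform):

    ```shell
    # 1. Re-initialize each disk and add it back to the disk group
    #    under its original VxVM disk media name (repeat for v5_02 ... v5_17)
    vxdisksetup -i c12t0d0
    vxdg -g icebox adddisk v5_01=c12t0d0

    # 2. Convert the saved vxprint output into vxmake format for vol05
    vxprint -D - -mpvsqQr vol05 < /tmp/vxprintoutput.xtx > /tmp/vol05

    # 3. Rebuild the volume, plex, and subdisk records from that file
    vxmake -g icebox -d /tmp/vol05

    # 4. Start the volume
    vxvol -g icebox start vol05

    # 5. Check the filesystem, then mount it
    fsck -F vxfs /dev/vx/rdsk/icebox/vol05
    mount -F vxfs /dev/vx/dsk/icebox/vol05 /icebox
    ```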


    "Mike Kuriger" wrote:
    >
    >I was able to get a copy of vxprint by importing the volume group on another
    >host with some disks I removed last week, so with this information I'm hoping
    >I can rebuild these subdisks and plex and volume manually. thanks so much
    >for the help! I'll just paste in the information for the volume I'm interested
    >in getting back.
    >
    >~mike~
    >
    >Disk group: icebox
    >
    >Group: icebox
    >info: dgid=1035323826.1295.hound
    >version: 90
    >activation: read-write
    >detach-policy: global
    >copies: nconfig=default nlog=default
    >minors: >= 48000
    >
    >Disk: v5_01
    >info: diskid=1077056775.2684.hound
    >assoc: nodevice (last known device: c16t4d0s2)
    >
    >Disk: v5_02
    >info: diskid=1077056791.2692.hound
    >assoc: nodevice (last known device: c3t4d0s2)
    >
    >Disk: v5_03
    >info: diskid=1077056794.2694.hound
    >assoc: nodevice (last known device: c4t4d0s2)
    >
    >Disk: v5_04
    >info: diskid=1077056797.2696.hound
    >assoc: nodevice (last known device: c19t4d0s2)
    >
    >Disk: v5_05
    >info: diskid=1077056800.2698.hound
    >assoc: nodevice (last known device: c18t4d0s2)
    >
    >Disk: v5_06
    >info: diskid=1077056803.2700.hound
    >assoc: nodevice (last known device: c19t8d0s2)
    >
    >Disk: v5_07
    >info: diskid=1077056806.2702.hound
    >assoc: nodevice (last known device: c18t8d0s2)
    >
    >Disk: v5_08
    >info: diskid=1077056810.2704.hound
    >assoc: nodevice (last known device: c16t8d0s2)
    >
    >Disk: v5_09
    >info: diskid=1077056781.2686.hound
    >assoc: nodevice (last known device: c19t9d0s2)
    >
    >Disk: v5_10
    >info: diskid=1077056768.2682.hound
    >assoc: nodevice (last known device: c18t9d0s2)
    >
    >Disk: v5_11
    >info: diskid=1077056784.2688.hound
    >assoc: nodevice (last known device: c16t9d0s2)
    >
    >Disk: v5_12
    >info: diskid=1077056787.2690.hound
    >assoc: nodevice (last known device: c19t10d0s2)
    >
    >Disk: v5_13
    >info: diskid=1121802238.3449.hound
    >assoc: nodevice (last known device: c18t5d0s2)
    >
    >Disk: v5_14
    >info: diskid=1121802234.3447.hound
    >assoc: nodevice (last known device: c18t10d0s2)
    >
    >Disk: v5_15
    >info: diskid=1126552615.3550.hound
    >assoc: nodevice (last known device: c16t5d0s2)
    >
    >Disk: v5_16
    >info: diskid=1126552619.3552.hound
    >assoc: nodevice (last known device: c18t13d0s2)
    >
    >Disk: v5_17
    >info: diskid=1126552623.3554.hound
    >assoc: nodevice (last known device: c3t12d0s2)
    >
    >
    >
    >Volume: vol05
    >info: len=4587485696
    >type: usetype=raid5
    >state: state=ACTIVE kernel=DISABLED cdsrecovery=0/0 (clean)
    >assoc: plexes=vol05-01
    >policies: read=RAID exceptions=NO_OP
    >flags: closed degraded unusable writeback
    >logging: type=RAID5 loglen=0 serial=0/0 (disabled)
    >apprecov: seqno=0/0
    >recovery: mode=UNKNOWN
    >recov_id=0
    >device: minor=48006 bdev=209/48006 cdev=209/48006 path=/dev/vx/dsk/icebox/vol05
    >perms: user=root group=root mode=0600
    >
    >Plex: vol05-01
    >info: len=0 (sparse)
    >type: layout=RAID columns=17 width=32
    >state: state=ACTIVE kernel=DISABLED io=read-write
    >assoc: vol=vol05 sd=v5_01-04,v5_01-01,v5_02-04,v5_02-01,v5_03-04,...
    >flags:
    >
    >Subdisk: v5_01-04
    >info: disk=v5_01 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=0 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_01-01
    >info: disk=v5_01 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=0 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_02-04
    >info: disk=v5_02 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=1 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_02-01
    >info: disk=v5_02 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=1 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_03-04
    >info: disk=v5_03 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=2 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_03-01
    >info: disk=v5_03 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=2 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_04-04
    >info: disk=v5_04 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=3 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_04-01
    >info: disk=v5_04 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=3 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_05-04
    >info: disk=v5_05 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=4 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_05-01
    >info: disk=v5_05 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=4 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_06-04
    >info: disk=v5_06 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=5 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_06-01
    >info: disk=v5_06 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=5 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_07-04
    >info: disk=v5_07 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=6 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_07-01
    >info: disk=v5_07 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=6 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_08-04
    >info: disk=v5_08 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=7 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_08-01
    >info: disk=v5_08 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=7 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_09-04
    >info: disk=v5_09 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=8 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_09-01
    >info: disk=v5_09 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=8 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_10-04
    >info: disk=v5_10 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=9 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_10-01
    >info: disk=v5_10 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=9 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_11-04
    >info: disk=v5_11 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=10 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_11-01
    >info: disk=v5_11 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=10 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_12-04
    >info: disk=v5_12 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=11 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_12-01
    >info: disk=v5_12 offset=232968960 len=53752320
    >assoc: vol=vol05 plex=vol05-01 (column=11 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_13-05
    >info: disk=v5_13 offset=1922820 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=12 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_13-01
    >info: disk=v5_13 offset=234895020 len=52130760
    >assoc: vol=vol05 plex=vol05-01 (column=12 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_13-02
    >info: disk=v5_13 offset=0 len=1620600
    >assoc: vol=vol05 plex=vol05-01 (column=12 offset=285098728)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_14-05
    >info: disk=v5_14 offset=1922820 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=13 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_14-01
    >info: disk=v5_14 offset=234895020 len=52130760
    >assoc: vol=vol05 plex=vol05-01 (column=13 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_14-02
    >info: disk=v5_14 offset=0 len=1620600
    >assoc: vol=vol05 plex=vol05-01 (column=13 offset=285098728)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_15-02
    >info: disk=v5_15 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=14 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_15-01
    >info: disk=v5_15 offset=232972584 len=53749960
    >assoc: vol=vol05 plex=vol05-01 (column=14 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >
    >Subdisk: v5_16-02
    >info: disk=v5_16 offset=0 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=15 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_16-01
    >info: disk=v5_16 offset=232972584 len=53749960
    >assoc: vol=vol05 plex=vol05-01 (column=15 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_17-03
    >info: disk=v5_17 offset=1585640 len=232967968
    >assoc: vol=vol05 plex=vol05-01 (column=16 offset=0)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_17-01
    >info: disk=v5_17 offset=234558224 len=52164320
    >assoc: vol=vol05 plex=vol05-01 (column=16 offset=232967968)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >Subdisk: v5_17-02
    >info: disk=v5_17 offset=0 len=1585640
    >assoc: vol=vol05 plex=vol05-01 (column=16 offset=285132288)
    >flags: disabled iofail
    >device: device=(no device)
    >
    >
    >---------------------------------------------------------------------------------
    >
    >Disk group: icebox
    >
    >DG NAME NCONFIG NLOG MINORS GROUP-ID
    >DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
    >RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
    >RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
    >V NAME RVG KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
    >PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
    >SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
    >SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
    >DC NAME PARENTVOL LOGVOL
    >SP NAME SNAPVOL DCO
    >
    >dg icebox default default 48000 1035323826.1295.hound
    >
    >dm v5_01 - - - - NODEVICE
    >dm v5_02 - - - - NODEVICE
    >dm v5_03 - - - - NODEVICE
    >dm v5_04 - - - - NODEVICE
    >dm v5_05 - - - - NODEVICE
    >dm v5_06 - - - - NODEVICE
    >dm v5_07 - - - - NODEVICE
    >dm v5_08 - - - - NODEVICE
    >dm v5_09 - - - - NODEVICE
    >dm v5_10 - - - - NODEVICE
    >dm v5_11 - - - - NODEVICE
    >dm v5_12 - - - - NODEVICE
    >dm v5_13 - - - - NODEVICE
    >dm v5_14 - - - - NODEVICE
    >dm v5_15 - - - - NODEVICE
    >dm v5_16 - - - - NODEVICE
    >dm v5_17 - - - - NODEVICE
    >
    >
    >v vol05 - DISABLED ACTIVE 4587485696 RAID - raid5
    >pl vol05-01 vol05 DISABLED ACTIVE 0 RAID 17/32 RW
    >sd v5_01-04 vol05-01 v5_01 0 232967968 0/0 - NDEV
    >sd v5_01-01 vol05-01 v5_01 232968960 53752320 0/232967968 - NDEV
    >sd v5_02-04 vol05-01 v5_02 0 232967968 1/0 - NDEV
    >sd v5_02-01 vol05-01 v5_02 232968960 53752320 1/232967968 - NDEV
    >sd v5_03-04 vol05-01 v5_03 0 232967968 2/0 - NDEV
    >sd v5_03-01 vol05-01 v5_03 232968960 53752320 2/232967968 - NDEV
    >sd v5_04-04 vol05-01 v5_04 0 232967968 3/0 - NDEV
    >sd v5_04-01 vol05-01 v5_04 232968960 53752320 3/232967968 - NDEV
    >sd v5_05-04 vol05-01 v5_05 0 232967968 4/0 - NDEV
    >sd v5_05-01 vol05-01 v5_05 232968960 53752320 4/232967968 - NDEV
    >sd v5_06-04 vol05-01 v5_06 0 232967968 5/0 - NDEV
    >sd v5_06-01 vol05-01 v5_06 232968960 53752320 5/232967968 - NDEV
    >sd v5_07-04 vol05-01 v5_07 0 232967968 6/0 - NDEV
    >sd v5_07-01 vol05-01 v5_07 232968960 53752320 6/232967968 - NDEV
    >sd v5_08-04 vol05-01 v5_08 0 232967968 7/0 - NDEV
    >sd v5_08-01 vol05-01 v5_08 232968960 53752320 7/232967968 - NDEV
    >sd v5_09-04 vol05-01 v5_09 0 232967968 8/0 - NDEV
    >sd v5_09-01 vol05-01 v5_09 232968960 53752320 8/232967968 - NDEV
    >sd v5_10-04 vol05-01 v5_10 0 232967968 9/0 - NDEV
    >sd v5_10-01 vol05-01 v5_10 232968960 53752320 9/232967968 - NDEV
    >sd v5_11-04 vol05-01 v5_11 0 232967968 10/0 - NDEV
    >sd v5_11-01 vol05-01 v5_11 232968960 53752320 10/232967968 - NDEV
    >sd v5_12-04 vol05-01 v5_12 0 232967968 11/0 - NDEV
    >sd v5_12-01 vol05-01 v5_12 232968960 53752320 11/232967968 - NDEV
    >sd v5_13-05 vol05-01 v5_13 1922820 232967968 12/0 - NDEV
    >sd v5_13-01 vol05-01 v5_13 234895020 52130760 12/232967968 - NDEV
    >sd v5_13-02 vol05-01 v5_13 0 1620600 12/285098728 - NDEV
    >sd v5_14-05 vol05-01 v5_14 1922820 232967968 13/0 - NDEV
    >sd v5_14-01 vol05-01 v5_14 234895020 52130760 13/232967968 - NDEV
    >sd v5_14-02 vol05-01 v5_14 0 1620600 13/285098728 - NDEV
    >sd v5_15-02 vol05-01 v5_15 0 232967968 14/0 - NDEV
    >sd v5_15-01 vol05-01 v5_15 232972584 53749960 14/232967968 - NDEV
    >sd v5_16-02 vol05-01 v5_16 0 232967968 15/0 - NDEV
    >sd v5_16-01 vol05-01 v5_16 232972584 53749960 15/232967968 - NDEV
    >sd v5_17-03 vol05-01 v5_17 1585640 232967968 16/0 - NDEV
    >sd v5_17-01 vol05-01 v5_17 234558224 52164320 16/232967968 - NDEV
    >sd v5_17-02 vol05-01 v5_17 0 1585640 16/285132288 - NDEV
    >
    >
    >
    >
    >Me wrote:
    >>OK, let me explain something.
    >>
    >>The volume information is NOT stored on the host, but on the disks.
    >>
    >>This means that if you delete (remove) volumes on a machine, it
    >>actually removes all traces from the disks (the data might still be
    >>there, but the Volume Manager configuration that knows how to access
    >>the data is gone). So, unless you have an old vxprint or saved your
    >>configuration, sorry, the old volumes are gone.
    >>
    >>On the new machine you should have just NOT started the volumes you
    >>did not need. This would have prevented access, but not removed them
    >>from the configuration permanently.
    >>
    >>OK, so now, how to get it back?
    >>
    >>Hope you have an old vxprint output, or if this is VM 4.0 or later,
    >>the disk group info will still be in the /etc/vx/cbr directory. I
    >>will try and help you get it back if you can post either here.
    >>
    >>Mike Kuriger wrote:
    >>> I had several volumes in a volume group. I stopped one of the
    >>> volumes and moved it to another host and did an import. The volume
    >>> showed up on the new host just fine, but so did all the other
    >>> volumes that I don't need on the new host, so I deleted them.
    >>> Well, a couple of days went by and I had to put that volume back
    >>> on the original host. I moved the disks back and the volume
    >>> reappeared on the original host; however, I lost all my other
    >>> volumes. Is there any way I can get those back? The disks are
    >>> fine, but when I do a vxdisk list c1t4d0s0 (for example) it still
    >>> knows it's in that volume group but it has nothing in the name
    >>> field.
    >>>
    >>> any help would be greatly appreciated!
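    Since the reply mentions that VxVM 4.0 and later keep a copy of the
    disk group configuration under /etc/vx/cbr, the usual path back is
    vxconfigrestore. A rough sketch (the disk group name icebox is taken
    from the vxprint output above; check that a backup actually exists,
    and that it predates the volume removal, before committing anything):

    # Look for an automatic configuration backup of the disk group
    # (VxVM 4.0+ stores these under /etc/vx/cbr/bk by default).
    ls /etc/vx/cbr/bk/

    # Precommit: stage the restored configuration so it can be checked
    # with vxprint before anything is written back permanently.
    vxconfigrestore -p icebox

    # If the staged configuration looks right, commit it ...
    vxconfigrestore -c icebox

    # ... or abandon (decommit) the precommitted restore if it is wrong.
    vxconfigrestore -d icebox

    One caveat: because the volumes here were explicitly deleted, the
    on-disk configuration copies may already reflect the deletion; the
    backup under /etc/vx/cbr only helps if it was taken before the
    removal.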

