raid problem? - BSD


Thread: raid problem?

  1. raid problem?

    Hello,

    I have set up two HP DL360 G5 servers with serial ATA drives.

    Both are set up with OpenBSD 4.0 and disk mirroring by RAIDframe.
    Almost every time I reboot a machine (they are not in production
    yet), I get a panic directly after the reboot command.

    I read on the internet that it is not recommended to have swap
    on a mirror, or on the same mirror as the rest of the system.
    After the panic, the system always comes up with dirty parity
    and starts rebuilding (which takes a while to complete).
    I also see the message "savecore: no dump device configured". Why is that?

    The question: is this normal when swap is not outside the RAID set,
    or is on the same RAID set as the rest?

    Thanks,

    Peter

  2. Re: raid problem?

    On Thu, 15 Feb 2007 20:32:42 +0100, Peter van Oord van der Vlies wrote:

    > Hello,
    >
    > I have set up two HP DL360 G5 servers with serial ATA drives.
    >
    > Both are set up with OpenBSD 4.0 and disk mirroring by RAIDframe.
    > Almost every time I reboot a machine (they are not in production
    > yet), I get a panic directly after the reboot command.
    >
    > I read on the internet that it is not recommended to have swap
    > on a mirror, or on the same mirror as the rest of the system.
    > After the panic, the system always comes up with dirty parity
    > and starts rebuilding (which takes a while to complete).
    > I also see the message "savecore: no dump device configured". Why is that?
    >
    > The question: is this normal when swap is not outside the RAID set,
    > or is on the same RAID set as the rest?


    Peter,

    A little information on your configuration will go a long way towards
    helping you.

    It appears, from your description, that you are getting a repeatable
    panic. Nobody is going to be able to help you with that unless you can
    provide some minimal information: what the panic is, what the backtrace
    shows, etc.

    I can answer the dump device question, as I had already investigated it.
    You can see the details by reading the misc@ archives. In brief: when you
    have root on RAID, you cannot dump core. It doesn't matter whether
    your swap is in a RAID set or on a standard device.

    As to your continuing problem with "dirty" areas after reboot, it could
    certainly be the panic. If you are able to shut down and boot without a
    panic, but still go through parity rebuilding, then you have a
    configuration problem.

    Post the output of disklabel for all disks (real and RAIDframe), and the
    output of fdisk for each disk if your architecture uses MBR.
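
    The requested output can be gathered along these lines (a sketch only;
    the wd0/wd1/raid0 device names are assumptions based on a typical
    two-disk RAIDframe mirror, not something the poster confirmed):

```shell
# Sketch: collect the configuration details requested above (run as root;
# device names wd0/wd1/raid0 are assumptions).
disklabel wd0        # label of the first physical disk
disklabel wd1        # label of the second physical disk
disklabel raid0      # label of the RAIDframe pseudo-device
fdisk wd0            # MBR partition table, on architectures that use MBR
fdisk wd1
raidctl -s raid0     # component and parity status of the RAID set
```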

    --
    Replying directly will get you locally blacklisted.
    Change the address; use my first name in front of the @ if you want to
    communicate privately.


  3. Re: raid problem?

    Josh Grosse wrote:
    > On Thu, 15 Feb 2007 20:32:42 +0100, Peter van Oord van der Vlies wrote:
    >
    >> Hello,
    >>
    >> I have set up two HP DL360 G5 servers with serial ATA drives.
    >>
    >> Both are set up with OpenBSD 4.0 and disk mirroring by RAIDframe.
    >> Almost every time I reboot a machine (they are not in production
    >> yet), I get a panic directly after the reboot command.
    >>
    >> I read on the internet that it is not recommended to have swap
    >> on a mirror, or on the same mirror as the rest of the system.
    >> After the panic, the system always comes up with dirty parity
    >> and starts rebuilding (which takes a while to complete).
    >> I also see the message "savecore: no dump device configured". Why is that?
    >>
    >> The question: is this normal when swap is not outside the RAID set,
    >> or is on the same RAID set as the rest?

    >
    > Peter,
    >
    > A little information on your configuration will go a long way towards
    > helping you.
    >
    > It appears, from your description, that you are getting a repeatable
    > panic. Nobody is going to be able to help you with that unless you can
    > provide some minimal information: what the panic is, what the backtrace
    > shows, etc.
    >
    > I can answer the dump device question, as I had already investigated it.
    > You can see the details by reading the misc@ archives. In brief: when you
    > have root on RAID, you cannot dump core. It doesn't matter whether
    > your swap is in a RAID set or on a standard device.
    >
    > As to your continuing problem with "dirty" areas after reboot, it could
    > certainly be the panic. If you are able to shut down and boot without a
    > panic, but still go through parity rebuilding, then you have a
    > configuration problem.
    >
    > Post the output of disklabel for all disks (real and RAIDframe), and the
    > output of fdisk for each disk if your architecture uses MBR.
    >


    Thank you, Josh.

    Tomorrow I am back at work and will post the disk layout and the dump.
    I had no time today to write down the trace; that's why I posted
    without that information. And the systems have no connection to the
    internet yet.

    For your information, the dirty areas occur only after the systems have
    panicked. When the reboot proceeds normally, the system boots normally.

    Peter

  4. Re: raid problem?

    Josh Grosse wrote:
    > On Thu, 15 Feb 2007 20:32:42 +0100, Peter van Oord van der Vlies wrote:
    >
    >> Hello,
    >>
    >> I have set up two HP DL360 G5 servers with serial ATA drives.
    >>
    >> Both are set up with OpenBSD 4.0 and disk mirroring by RAIDframe.
    >> Almost every time I reboot a machine (they are not in production
    >> yet), I get a panic directly after the reboot command.
    >>
    >> I read on the internet that it is not recommended to have swap
    >> on a mirror, or on the same mirror as the rest of the system.
    >> After the panic, the system always comes up with dirty parity
    >> and starts rebuilding (which takes a while to complete).
    >> I also see the message "savecore: no dump device configured". Why is that?
    >>
    >> The question: is this normal when swap is not outside the RAID set,
    >> or is on the same RAID set as the rest?

    >
    > Peter,
    >
    > A little information on your configuration will go a long way towards
    > helping you.
    >
    > It appears, from your description, that you are getting a repeatable
    > panic. Nobody is going to be able to help you with that unless you can
    > provide some minimal information: what the panic is, what the backtrace
    > shows, etc.
    >
    > I can answer the dump device question, as I had already investigated it.
    > You can see the details by reading the misc@ archives. In brief: when you
    > have root on RAID, you cannot dump core. It doesn't matter whether
    > your swap is in a RAID set or on a standard device.
    >
    > As to your continuing problem with "dirty" areas after reboot, it could
    > certainly be the panic. If you are able to shut down and boot without a
    > panic, but still go through parity rebuilding, then you have a
    > configuration problem.
    >
    > Post the output of disklabel for all disks (real and RAIDframe), and the
    > output of fdisk for each disk if your architecture uses MBR.
    >


    I still had a swap partition on both disks, so I use that now and have
    disabled the swap on raid0.
    It seems to work.
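
    The change described above can be sketched roughly as follows (a minimal
    sketch only; the wd*b device names come from the disklabels below, while
    /dev/raid0b and the exact commands are assumptions, not the poster's
    verbatim steps):

```shell
# Sketch: move swap off the RAID set onto the native swap partitions.
# Device names are assumed from the disklabels below; run as root.
swapctl -d /dev/raid0b     # disable swap on the RAID device, if active
swapctl -a /dev/wd0b       # enable the per-disk swap partitions
swapctl -a /dev/wd1b
# For a persistent change, list the wd*b partitions in /etc/fstab:
#   /dev/wd0b none swap sw 0 0
#   /dev/wd1b none swap sw 0 0
```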

    Here is the information you wanted (the fault and trace are below):

    # disklabel -E wd0
    # Inside MBR partition 3: type A6 start 63 size 156296322

    Treating sectors 63-156296385 as the OpenBSD portion of the disk.
    You can use the 'b' command to change this.

    Initial label editor (enter '?' for help at any prompt)
    > p

    device: /dev/rwd0c
    type: ESDI
    disk: ESDI/IDE disk
    label: ST3808110AS
    bytes/sector: 512
    sectors/track: 63
    tracks/cylinder: 16
    sectors/cylinder: 1008
    cylinders: 16383
    total sectors: 156301488
    free sectors: 0
    rpm: 3600

    16 partitions:
    #          size   offset  fstype [fsize bsize cpg]
      a:    2097585       63  4.2BSD   2048 16384  328  # Cyl     0*-   2080
      b:    4194288  2097648    swap                    # Cyl  2081 -   6241
      c:  156301488        0  unused      0     0       # Cyl     0 - 155060
      d:  150004449  6291936    RAID                    # Cyl  6242 - 155055*

    # disklabel -E wd1
    # Inside MBR partition 3: type A6 start 63 size 156296322

    Treating sectors 63-156296385 as the OpenBSD portion of the disk.
    You can use the 'b' command to change this.

    Initial label editor (enter '?' for help at any prompt)
    > p

    device: /dev/rwd1c
    type: ESDI
    disk: ESDI/IDE disk
    label: ST3808110AS
    bytes/sector: 512
    sectors/track: 63
    tracks/cylinder: 16
    sectors/cylinder: 1008
    cylinders: 16383
    total sectors: 156301488
    free sectors: 0
    rpm: 3600

    16 partitions:
    #          size   offset  fstype [fsize bsize cpg]
      a:    2097585       63  4.2BSD   2048 16384  328  # Cyl     0*-   2080
      b:    4194288  2097648    swap                    # Cyl  2081 -   6241
      c:  156301488        0  unused      0     0       # Cyl     0 - 155060
      d:  150004449  6291936    RAID                    # Cyl  6242 - 155055*

    # raidctl -sv raid0
    raid0 Components:
    /dev/wd0d: optimal
    /dev/wd1d: optimal
    No spares.
    Component label for /dev/wd0d:
    Row: 0, Column: 0, Num Rows: 1, Num Columns: 2
    Version: 2, Serial Number: 20070200, Mod Counter: 110
    Clean: No, Status: 0
    sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
    Queue size: 100, blocksize: 512, numBlocks: 150004352
    RAID Level: 1
    Autoconfig: Yes
    Root partition: Yes
    Last configured as: raid0
    Component label for /dev/wd1d:
    Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
    Version: 2, Serial Number: 20070200, Mod Counter: 110
    Clean: No, Status: 0
    sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
    Queue size: 100, blocksize: 512, numBlocks: 150004352
    RAID Level: 1
    Autoconfig: Yes
    Root partition: Yes
    Last configured as: raid0
    Parity status: clean
    Reconstruction is 100% complete.
    Parity Re-write is 100% complete.
    Copyback is 100% complete.
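
    If parity comes up dirty after an unclean shutdown, it can also be
    rewritten by hand rather than waiting for the automatic rebuild at boot.
    A sketch, using the raid0 device shown above:

```shell
# Sketch (uses the raid0 device shown above; run as root).
raidctl -P raid0     # check parity and rewrite it if not known to be clean
raidctl -S raid0     # report progress of the parity re-write
raidctl -s raid0     # display component and parity status when done
```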


    The crash:

    uvm_fault(0xd0821a20, 0xe9dbd00, 0, 1) -> e
    kernel: page fault trap, code=0
    Stopped at pmap_page_remove_86+0x125: movl 0(%eax,%edx,4),%eax


    show panic:
    the kernel did not panic


    trace:
    pmap_page_remove_86(d0cba46,d04b6474,8,246,d7fd738a8) at
    pmap_page_remove_86+0x125
    uvm_vnp_terminate(d8135138,0,0,0,0,14,d81869d8,9) at uvm_vnp_terminate+0x36b
    uvn_attach(d8135138,0,b7,0,d7f4f89c) at uvn_attach+0x2d1
    uvmspace_free(d7f4f89c,6,d085f740) at uvmspace_free+0x10d
    uvm_exit(d7f3d87c,d038e561,8,286) at uvm_exit+0x1a
    reaper(d81869d8) at reaper+0x9a
    Bad frame pointer: 0xd0989eb8

  5. Re: raid problem?

    On Fri, 16 Feb 2007 12:05:00 +0100, Peter van Oord van der Vlies wrote:

    > I still had a swap partition on both disks, so I use that now and have
    > disabled the swap on raid0.
    > It seems to work.


    Glad you got things working.

    > Here is the information you wanted (the fault and trace are below):




    > ...The crash:
    >
    > uvm_fault(0xd0821a20, 0xe9dbd00, 0, 1) -> e
    > kernel: page fault trap, code=0
    > Stopped at pmap_page_remove_86+0x125: movl 0(%eax,%edx,4),%eax
    >
    >
    > show panic:
    > the kernel did not panic
    >
    >
    > trace:
    > pmap_page_remove_86(d0cba46,d04b6474,8,246,d7fd738a8) at
    > pmap_page_remove_86+0x125
    > uvm_vnp_terminate(d8135138,0,0,0,0,14,d81869d8,9) at uvm_vnp_terminate+0x36b
    > uvn_attach(d8135138,0,b7,0,d7f4f89c) at uvn_attach+0x2d1
    > uvmspace_free(d7f4f89c,6,d085f740) at uvmspace_free+0x10d
    > uvm_exit(d7f3d87c,d038e561,8,286) at uvm_exit+0x1a
    > reaper(d81869d8) at reaper+0x9a
    > Bad frame pointer: 0xd0989eb8


    Your snipped disklabels showed your IDE drives, but not your RAID
    devices. I am going to guess your problem was a swap partition
    configuration issue; perhaps your RAID disklabels have swap as the first
    partition, with no offset?

    Anyway, it's just conjecture. You now have a working RAIDframe system.
    Congratulations!


    --
    Replying directly will get you locally blacklisted.
    Change the address; use my first name in front of the @ if you want to
    communicate privately.


  6. Re: raid problem?

    Josh Grosse wrote:
    > On Fri, 16 Feb 2007 12:05:00 +0100, Peter van Oord van der Vlies wrote:
    >
    >> I still had a swap partition on both disks, so I use that now and have
    >> disabled the swap on raid0.
    >> It seems to work.

    >
    > Glad you got things working.
    >


    The systems are still working. I have done several rebuilds and kernel
    builds today, during which they would normally have crashed.


    >> Here is the information you wanted (the fault and trace are below):

    >
    >
    >
    >> ...The crash:
    >>
    >> uvm_fault(0xd0821a20, 0xe9dbd00, 0, 1) -> e
    >> kernel: page fault trap, code=0
    >> Stopped at pmap_page_remove_86+0x125: movl 0(%eax,%edx,4),%eax
    >>
    >>
    >> show panic:
    >> the kernel did not panic
    >>
    >>
    >> trace:
    >> pmap_page_remove_86(d0cba46,d04b6474,8,246,d7fd738a8) at
    >> pmap_page_remove_86+0x125
    >> uvm_vnp_terminate(d8135138,0,0,0,0,14,d81869d8,9) at uvm_vnp_terminate+0x36b
    >> uvn_attach(d8135138,0,b7,0,d7f4f89c) at uvn_attach+0x2d1
    >> uvmspace_free(d7f4f89c,6,d085f740) at uvmspace_free+0x10d
    >> uvm_exit(d7f3d87c,d038e561,8,286) at uvm_exit+0x1a
    >> reaper(d81869d8) at reaper+0x9a
    >> Bad frame pointer: 0xd0989eb8

    >
    > Your snipped disklabels showed your IDE drives, but not your RAID
    > devices. I am going to guess your problem was a swap partition
    > configuration issue; perhaps your RAID disklabels have swap as the first
    > partition, with no offset?
    >

    No, the first was a 2048 MB partition; the second was the swap.

    > Anyway, it's just conjecture. You now have a working RAIDframe system.
    > Congratulations!
    >
    >

    Next time we will buy them with hardware RAID controllers and SCSI
    disks. Normally we do, but this st*pid customer wants these cheap
    systems.


  7. Re: raid problem?

    Josh Grosse wrote:
    > On Fri, 16 Feb 2007 12:05:00 +0100, Peter van Oord van der Vlies wrote:
    >
    >> I still had a swap partition on both disks, so I use that now and have
    >> disabled the swap on raid0.
    >> It seems to work.

    >
    > Glad you got things working.
    >
    >> Here is the information you wanted (the fault and trace are below):

    >
    >
    >
    >> ...The crash:
    >>
    >> uvm_fault(0xd0821a20, 0xe9dbd00, 0, 1) -> e
    >> kernel: page fault trap, code=0
    >> Stopped at pmap_page_remove_86+0x125: movl 0(%eax,%edx,4),%eax
    >>
    >>
    >> show panic:
    >> the kernel did not panic
    >>
    >>
    >> trace:
    >> pmap_page_remove_86(d0cba46,d04b6474,8,246,d7fd738a8) at
    >> pmap_page_remove_86+0x125
    >> uvm_vnp_terminate(d8135138,0,0,0,0,14,d81869d8,9) at uvm_vnp_terminate+0x36b
    >> uvn_attach(d8135138,0,b7,0,d7f4f89c) at uvn_attach+0x2d1
    >> uvmspace_free(d7f4f89c,6,d085f740) at uvmspace_free+0x10d
    >> uvm_exit(d7f3d87c,d038e561,8,286) at uvm_exit+0x1a
    >> reaper(d81869d8) at reaper+0x9a
    >> Bad frame pointer: 0xd0989eb8

    >
    > Your snipped disklabels showed your IDE drives, but not your RAID
    > devices. I am going to guess your problem was a swap partition
    > configuration issue; perhaps your RAID disklabels have swap as the first
    > partition, with no offset?
    >
    > Anyway, it's just conjecture. You now have a working RAIDframe system.
    > Congratulations!
    >
    >

    At the end of the day the systems crashed again after the reboot
    command. I looked at the disklabel and, yes, I am starting directly
    from offset 0 instead of 63. Can this be a real problem? Tomorrow I
    will reconfigure the systems to test this.
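
    A quick way to check this (a sketch only; device names are assumed from
    earlier in the thread) is to print the labels and inspect the offset
    column of the first partition:

```shell
# Sketch (device names assumed from the thread; run as root).
disklabel wd0        # here partition 'a' starts at offset 63, leaving room
                     # for the MBR and disklabel at the start of the disk
disklabel raid0      # check whether the first partition starts at offset 0
```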

    Peter
