This is a discussion on software raid1 on root woes - SuSE.
I am just installing a new 'server' running SuSE 10.2 Linux. I want some
redundancy on the disks, so I created a software raid1 set. The problem I am
facing is that after a reboot the system comes up with hda1 degraded; hdc1
is missing. Using mdadm to add hdc1 back to the raid set makes the system
unresponsive, with a rebuild in progress at a speed of between
800 and 1400 KB/sec. cat /proc/mdstat says the rebuild will finish in
about 600 minutes (10 hours!), and the processes md1_raid1 and md1_resync
take up all the CPU time.
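For reference, the recovery attempt amounts to roughly the following (device names as on my system; the speed_limit check is just the standard md throttle tunables, which I have not touched):

```shell
# Re-add the missing mirror half to the degraded root array
mdadm /dev/md1 --add /dev/hdc1

# Watch the rebuild; this is where I see ~800-1400 KB/sec and ~600 min ETA
cat /proc/mdstat

# Kernel resync throttle limits (KB/sec); the defaults allow far more than
# 1400 KB/sec, so the throttle alone does not explain the slowness
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
```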
The disk layout is as follows:
  hda on ide1, primary
  hdc on ide2, primary
The following partitions are on the disk:
  hda2  512 MB  boot
and in the extended partition:
  hda6  40 GB   data
hda1, hda2, hda3 and hda6 are mirrored onto hdc. The partitions are of
exactly equal size, although the disks are of different makes. To create the
raid, I pretty much followed
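From memory, the creation recipe boiled down to the usual mdadm steps; the array names and partition pairs below are illustrative, not an exact transcript:

```shell
# Partition type fd (Linux raid autodetect) on both disks, then per pair:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2

# Record the arrays so they can be assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
```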
I have cloned this system to a bootable USB drive, which, when started,
detects the raid arrays and creates the corresponding md devices correctly.
There too, md1 comes up degraded. Running mdadm to add hdc1 takes about 15
minutes to rebuild; but that is not on the system disk.
What is wrong with the system disk that, after a clean shutdown, the root
partition comes up degraded? How do I fix it?
Why does the rebuild consume all CPU when running on the system disk, but
run normally from the rescue disk?
What am I overlooking?
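In case it helps with diagnosing the degraded state, this is what I can run after a boot (output elided here):

```shell
# Overall state of the root array
mdadm --detail /dev/md1

# Compare the on-disk superblocks of both halves; the event counters and
# update times should show which half the kernel considered stale
mdadm --examine /dev/hda1 /dev/hdc1
```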
Thanks in advance