I have a single linux system with a 40 GB drive that is getting full.
This system is using LVM for everything except /boot and swap.
I want to replace it with two 80 GB drives doing RAID1 (mirroring)
for extra reliability.

Ideally, I would like to achieve this without a complete reinstall,
reconfiguration, and restoration of data from backups.

After much research on the web, it looked like this was possible.
Fortunately, I have access to a PC with VMWare and could experiment
until I got it right. While a large number of sites provided good
instructions, none of them worked as-is. The solution I record below
is not only a conglomeration of what I learned from those sites
but is a step-by-step list of what I did that worked in my
experimentation with a VMWare guest system with a Red Hat 9 "minimal"
installation.

I would like to see if anyone has comments or suggestions
for improving this. I still have a few days before the new drives
arrive. Because part of this procedure migrates all the data in the
LVM VG from the old PV to the new PV, the original disk will no longer
contain any of my data afterwards. This is a one-way process, and I
only get one chance to get it right. Of course I am backing up all of
my data, but I really want to avoid actually needing to use those
backups.

SYSTEM DESCRIPTION:
OS: Red Hat 9, kept up to date on fixes
Drive partitioning:
hda1 /boot
hda2 swap
hda3 LVM PV in VG "rootvg"
LVM rootvg:
lv_root /
lv_usr /usr
lv_tmp /tmp
lv_home /home
lv_var /var

DESIRED RESULT:
Drive partitioning:
hda1 /dev/md0
hda2 /dev/md1
hda3 /dev/md2
hdc1 /dev/md0
hdc2 /dev/md1
hdc3 /dev/md2
RAID1:
md0 /boot
md1 swap
md2 LVM PV in VG "rootvg"
LVM rootvg:
lv_root /
lv_usr /usr
lv_tmp /tmp
lv_home /home
lv_var /var


SOLUTION:
1) Move the CD drive from hdc to hdd.
2) Install the new disk that will be hdc.
3) Install the new disk that will become the new hda,
but connect it as hdb temporarily.
4) Boot the system.
5) Use fdisk to partition both disks *identically*, remembering to
mark partition 1 bootable and to set the type of all three partitions
to fd (Linux raid autodetect); see the sfdisk shortcut after the
listing.
$ fdisk /dev/hdb
$ fdisk /dev/hdc
Target layout on each disk (shown as it will appear on the final hda):
   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1        74    298336+  fd  Linux raid autodetect
/dev/hda2            75       148    298368   fd  Linux raid autodetect
/dev/hda3           149       780   2548224   fd  Linux raid autodetect
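(As an alternative to running fdisk on the second disk by hand, sfdisk
can clone the partition table from the first disk:
$ sfdisk -d /dev/hdb | sfdisk /dev/hdc
I used fdisk on both as above, so treat this as an untested shortcut.)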
6) Create /etc/raidtab as follows:
##################################
# raidtab
##################################
# /boot
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              64k
        persistent-superblock   1
        device                  /dev/hdb1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
# swap
raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              64k
        persistent-superblock   1
        device                  /dev/hdb2
        raid-disk               0
        device                  /dev/hdc2
        raid-disk               1
# LVM PV
raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              64k
        persistent-superblock   1
        device                  /dev/hdb3
        raid-disk               0
        device                  /dev/hdc3
        raid-disk               1
##################################
7) Create the RAID devices:
$ mkraid /dev/md0
$ mkraid /dev/md1
$ mkraid /dev/md2
$ cat /proc/mdstat
Wait until all devices have finished syncing, at which point you should see:
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 hdc3[1] hdb3[0]
2548160 blocks [2/2] [UU]

md1 : active raid1 hdc2[1] hdb2[0]
298304 blocks [2/2] [UU]

md0 : active raid1 hdc1[1] hdb1[0]
298240 blocks [2/2] [UU]
unused devices: <none>
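(The initial sync can take a while; to watch the progress update in
place, something like this works, watch being part of procps and so
present even on a minimal install:
$ watch -n 5 cat /proc/mdstat
)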
8) Build the /boot device.
$ mke2fs -j /dev/md0
$ mkdir /mnt/md0
$ mount /dev/md0 /mnt/md0
9) Build a new initrd image that contains the RAID drivers,
or you'll end up with a kernel panic because the / filesystem
couldn't be found because the LVM VG couldn't be found because
the PV it is on couldn't be found because the RAID device was
not started because the initrd did not have the RAID drivers.
(It took me ages to figure this out, and even now I cannot find any
mention of this problem in Google searches.)
$ cd /boot
For each initrd* image:
$ mv initrd-2.4.20-8.img initrd-2.4.20-8.img.orig
$ mkinitrd initrd-2.4.20-8.img 2.4.20-8
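(To sanity-check the new initrd before rebooting: a 2.4-era initrd is
just a gzipped ext2 image, so it can be loopback-mounted and inspected.
/mnt/initrd here is just a scratch mount point I made up:
$ zcat /boot/initrd-2.4.20-8.img > /tmp/initrd.ext2
$ mkdir -p /mnt/initrd
$ mount -o loop /tmp/initrd.ext2 /mnt/initrd
$ ls /mnt/initrd/lib
$ umount /mnt/initrd
You should see raid1.o listed, along with lvm-mod.o for the LVM root.)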
10) Copy the current /boot files to the new device:
$ cd /boot
$ find . -xdev | cpio -pm /mnt/md0
$ cd /
$ umount /dev/md0
$ umount /boot
$ mount /dev/md0 /boot
11) Install grub on the MBR of each new drive.
$ grub
grub> device (hd0) /dev/hdb
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
Checking if "/boot/grub/stage1" esists... no
Checking if "/grub/stage1" esists... yes
Checking if "/grub/stage2" esists... yes
Checking if "/grub/e2fs_stage1_5" esists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub
.conf"... succeeded
Done.
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
Checking if "/boot/grub/stage1" esists... no
Checking if "/grub/stage1" esists... yes
Checking if "/grub/stage2" esists... yes
Checking if "/grub/e2fs_stage1_5" esists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub
.conf"... succeeded
Done.
grub> quit
12) Edit /etc/fstab to change "LABEL=/boot" to "/dev/md0".
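The resulting /boot line should look something like this (the other
fields come from your existing fstab):
/dev/md0                /boot                   ext3    defaults        1 2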
13) Build the swap device.
$ mkswap /dev/md1
$ swapon /dev/md1
$ swapon -s
14) Edit /etc/fstab to change "/dev/hda2" to "/dev/md1".
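The resulting swap line should look something like:
/dev/md1                swap                    swap    defaults        0 0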
15) Build the LVM PV device, and move the VG completely onto it.
$ pvcreate /dev/md2
$ pvscan
$ vgextend rootvg /dev/md2
$ pvmove -v /dev/hda3 /dev/md2
$ pvdisplay /dev/hda3
$ vgreduce rootvg /dev/hda3
$ pvscan
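Before shutting down, it is worth confirming that rootvg now lives
entirely on the new PV; vgdisplay with -v lists the PVs backing the
VG, and /dev/hda3 should no longer appear:
$ vgdisplay -v rootvg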
16) Hardware change:
$ shutdown -h now
Remove the original hda.
Move hdb to be hda.
Boot up.
17) Edit /etc/raidtab to change 'hdb?' to 'hda?' in all three places
(a one-liner for this follows). This step appears to be merely
cosmetic now, but for sanity's sake it is probably best to keep the
file in sync with reality.
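Assuming your sed supports in-place editing (otherwise just use an
editor):
$ sed -i 's/hdb/hda/g' /etc/raidtab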

....And that's all!

Check that it's working:
$ cat /proc/mdstat
$ mount
$ swapon -s

For more thorough testing:
boot with both disks
remove hdc only
remove hda only (requires a BIOS that can boot from the secondary bus)
swap the disks
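
Note that after booting with a disk pulled and then reattaching it,
the arrays stay degraded until you re-add the partitions by hand.
With raidtools that is raidhotadd; for example, if hdc was the disk
that was pulled:
$ raidhotadd /dev/md0 /dev/hdc1
$ raidhotadd /dev/md1 /dev/hdc2
$ raidhotadd /dev/md2 /dev/hdc3
Then watch cat /proc/mdstat until the resync completes.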