[WBEL-users] multiple md devices & lvm

Toby Bluhm tkb9 at adelphia.net
Tue Feb 8 15:10:57 CST 2005


 
---- "Kirby C. Bohling" <kbohling at birddog.com> wrote: 
> On Tue, Feb 08, 2005 at 08:59:17AM -0500, Toby Bluhm wrote:
> > Has anyone tried using lvm on a box that has more than one md device with one device being raid5?
> 
> I've done that.
> 
> [root at vulture root]# cat /proc/mdstat
> Personalities : [raid1] [raid5]
> read_ahead 1024 sectors
> Event: 5
> md4 : active raid5 hda1[0] hdb1[1] hdc1[2] hdd1[3]
>       878907648 blocks level 5, 256k chunk, algorithm 0 [4/4] [UUUU]
> 
> md1 : active raid1 sdb1[1] sda1[0]
>       208704 blocks [2/2] [UU]
> 
> [root at vulture root]# cat /etc/issue
> White Box Enterprise Linux release 3.0 (Liberation Respin 1)
> Kernel \r on an \m
> 
> [root at vulture root]# uname -a
> Linux vulture.birddog.com 2.4.21-15.0.2.ELsmp #1 SMP Fri Jun 18 23:13:20 EDT 2004 i686 i686 i386 GNU/Linux
> 
> (It's a little behind on the kernel, but everything on the system is
> LVM except for /, /boot and swap).
> 
> > 
> > The box has / & /boot on mirrored md. I can create many more
> > striped or mirrored devices with the other disks & add them to
> > lvm, create a vg, create a lv, create ext3 fs and it all works
> > fine. However, if I make a raid5, add it to lvm, etc it all works
> > fine until I reboot - system complains it can't find ext2
> > superblocks on the lv.  If I just put the fs right on the md, that
> > works fine too. Just seems that when lvm is involved, it goes down
> > the toilet.
> 
> My guess is that you need to rebuild your initrd w/ the raid5 (and
> the xor) modules.  You'll need RAID5 loaded early on so LVM can
> autodetect with vgscan/vgchange while running
> /etc/rc.d/rc.sysinit, if I remember correctly.
> 
> I ran into this on several machines where I added RAID5 after the
> initial install.
> 
> There are three ways to do this:
> 
> 1.  Do it by hand by munging the initrd by mounting it loopback.
> 
> 2.  Just uninstall the kernel, and then reinstall the kernel.
> (Alternatively, install an older kernel, reboot onto it, then
> uninstall the current kernel, then reinstall it, then reboot to the
> newer kernel).  That should fix the problem.
> 
> 3.  Run mkinitrd by hand with the proper options.
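For option 3, this is roughly what I'd try here (a sketch, assuming the mkinitrd on WBEL3 takes the usual --with= and -f flags; the module names raid5 and xor come from the personalities shown in /proc/mdstat above):

```shell
# Back up the current initrd first
cp /boot/initrd-2.4.21-27.0.2.ELsmp.img /boot/initrd-2.4.21-27.0.2.ELsmp.img.bak

# Force the raid5 (and xor) modules into the image; -f overwrites the existing file
mkinitrd -f --with=raid5 --with=xor \
    /boot/initrd-2.4.21-27.0.2.ELsmp.img 2.4.21-27.0.2.ELsmp

# Re-run the boot loader since the initrd image changed
lilo -v
```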
> 

I did run mkinitrd once everything was set up and running. But I did not specify any modules, just a naked mkinitrd:

mv /boot/initrd-2.4.21-27.0.2.ELsmp.img /boot/initrd-2.4.21-27.0.2.ELsmp.img.bak
mkinitrd /boot/initrd-2.4.21-27.0.2.ELsmp.img 2.4.21-27.0.2.ELsmp
lilo -v ( just a sanity check )

Then rebooted.

Are there other options I should be using with mkinitrd?
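One way I could check whether that naked mkinitrd actually picked up the raid5 module (a sketch, assuming a 2.4-style gzipped ext2 initrd image; the paths are from my box):

```shell
# Work on a copy of the initrd (2.4 initrds are gzipped ext2 images)
cp /boot/initrd-2.4.21-27.0.2.ELsmp.img /tmp/initrd.img.gz
gunzip /tmp/initrd.img.gz

# Loopback-mount the unpacked image and look for the raid modules
mkdir -p /mnt/initrd
mount -o loop /tmp/initrd.img /mnt/initrd
ls /mnt/initrd/lib                 # raid5.o and xor.o should be here
grep insmod /mnt/initrd/linuxrc    # and linuxrc should load them
umount /mnt/initrd
```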

Seeing that it works for other folks, I think I'll break it all down (again) and start over. Maybe I jumped around too much in troubleshooting & missed something.

> > 
> > I have other wb3 boxes using lvm and a raid5 as the only md device
> > with no problems.  I ran into this before, on a rh9 box, but I
> > thought it was due to me trying & tweaking to get xfs & other
> > stuff working using a stock kernel, non-standard rpms, etc.
> 
> Let me guess you added the RAID5 after you did the initial install.

Yessir.

> Gets me every time.

Hate it when that happens.

> 
> > 
> > Is this an lvm thing or a RedHat thing?
> > 
> > Thanks
> > 



More information about the Whitebox-users mailing list