[WBEL-users] multiple md devices & lvm
Scott Silva
ssilva at sgvwater.com
Tue Feb 8 14:39:30 CST 2005
Kirby C. Bohling wrote:
> On Tue, Feb 08, 2005 at 08:59:17AM -0500, Toby Bluhm wrote:
>
>>Has anyone tried using lvm on a box that has more than one md device with one device being raid5?
>
>
> I've done that.
>
> [root@vulture root]# cat /proc/mdstat
> Personalities : [raid1] [raid5]
> read_ahead 1024 sectors
> Event: 5
> md4 : active raid5 hda1[0] hdb1[1] hdc1[2] hdd1[3]
> 878907648 blocks level 5, 256k chunk, algorithm 0 [4/4] [UUUU]
>
> md1 : active raid1 sdb1[1] sda1[0]
> 208704 blocks [2/2] [UU]
>
> [root@vulture root]# cat /etc/issue
> White Box Enterprise Linux release 3.0 (Liberation Respin 1)
> Kernel \r on an \m
>
> [root@vulture root]# uname -a
> Linux vulture.birddog.com 2.4.21-15.0.2.ELsmp #1 SMP Fri Jun 18 23:13:20 EDT 2004 i686 i686 i386 GNU/Linux
>
> (It's a little behind on the kernel, but everything on the system is
> LVM except for /, /boot and swap).
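
The layering being discussed (md devices used as LVM physical volumes) can be
sketched roughly as below. The device and volume names are illustrative, not
taken from the thread, and the LVM1 tools shipped with WBEL 3 may differ
slightly in flags from modern LVM2:

```shell
# Sketch: put an existing raid5 md device under LVM.
# /dev/md4, vg0, and lv_data are hypothetical names, not from the thread.

pvcreate /dev/md4                 # label the md device as an LVM physical volume
vgcreate vg0 /dev/md4             # create a volume group on top of it
lvcreate -L 100G -n lv_data vg0   # carve out a logical volume
mkfs -t ext3 /dev/vg0/lv_data     # make an ext3 filesystem on the LV
mount /dev/vg0/lv_data /mnt/data  # and mount it
```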
>
>
>>The box has / & /boot on mirrored md. I can create many more
>>striped or mirrored devices with the other disks & add them to
>>lvm, create a vg, create a lv, create ext3 fs and it all works
>>fine. However, if I make a raid5, add it to lvm, etc. it all works
>>fine until I reboot - system complains it can't find ext2
>>superblocks on the lv. If I just put the fs right on the md, that
>>works fine too. Just seems that when lvm is involved, it goes down
>>the toilet.

If you didn't have raid-5 before, maybe your initrd doesn't have the
raid-5 module in it. Try making a new initrd after you create the raid5.
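
A hedged sketch of that initrd rebuild on a RHEL-3-era box: the image
filename follows the usual /boot convention for the running kernel, and
`--with=raid5` forces the module into the image even if autodetection
misses it (check your distribution's mkinitrd man page for the exact
options it supports):

```shell
# Rebuild the initrd so the raid5 module loads at boot (RHEL 3 / WBEL 3 era).
# Back up the old image first in case the new one fails to boot.

cd /boot
cp initrd-$(uname -r).img initrd-$(uname -r).img.bak
mkinitrd -f --with=raid5 initrd-$(uname -r).img $(uname -r)

# With GRUB the existing initrd line in grub.conf keeps working as-is;
# with LILO, rerun /sbin/lilo so the new image gets mapped.
```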
--
"If you have ever eaten crow,
It don't taste like chicken!!"