[WBEL-users] Software raid and User Mode Linux CD burning on WBEL

Tim Moore whitebox@nsr500.net
Tue, 03 Aug 2004 16:31:39 -0700


I've been using software RAID since the early patch days also and have 
never had a problem.  I've used a 3ware 6400 (4xATA/66) controller in both 
RAID and non-RAID modes and have found no performance difference.  As James 
suggests, once you get beyond a ~450MHz Celeron uniprocessor, the CPU 
overhead is no longer significant.

Note for mirror users: most if not all hardware controllers (3ware 
included) write their own metadata to the drives during initialization, 
which makes the disks NON-PORTABLE in case of a controller failure.  I 
found this out the hard way, trying to move half a 3ware RAID1 array to a 
different machine 
after a motherboard failure.
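
By contrast, Linux software RAID members carry a standard md superblock, so 
the surviving half of a soft mirror can be read on any Linux box.  A quick 
sanity check, assuming mdadm is installed and the transplanted disk shows 
up as /dev/sdb on the new machine (device names are illustrative, not taken 
from the box below):

   mdadm --examine /dev/sdb5                    # inspect the md superblock
   mdadm --assemble --run /dev/md0 /dev/sdb5    # start it as a degraded one-disk array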

Currently I run a modified RAID0 setup: rsync to a different physical 
controller every two hours, and to a different physical machine daily. 
sda1 contains the boot block, / and /boot, and is dd'ed, boot block 
included, to sdb1 as part of the primary rsync run.  This allows booting 
from either physical disk, with no root-on-RAID issues, simply by passing 
different kernel params.
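
The primary sync pass looks roughly like the sketch below.  Paths and 
options are illustrative rather than the actual cron job; /snapshot is the 
ext3 filesystem on hdc shown in the mount output further down, remounted 
writable for the duration of the run:

   mount -o remount,rw /snapshot
   # copy sda1, boot block included, so either disk can boot standalone
   dd if=/dev/sda1 of=/dev/sdb1 bs=1M
   # mirror the RAID0 filesystems onto the drive on the other controller
   rsync -aHx --delete /      /snapshot/root/
   rsync -aHx --delete /usr/  /snapshot/usr/
   rsync -aHx --delete /home/ /snapshot/home/
   mount -o remount,ro /snapshot

Booting from the second disk is then just a matter of a different root= 
argument (root=/dev/sdb1 instead of root=/dev/sda1) at the boot prompt.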

Performance is pretty much at the limits of ATA/66 on a matched pair of old 
IBM IC35L020AVER07's (ATA drives show up as SCSI devices on the 3ware 
controller):

[15:45] abit:~ > hdparm -tT /dev/sd{a,b}5 /dev/md0

/dev/sda5:
  Timing buffer-cache reads:   656 MB in  2.00 seconds = 328.00 MB/sec
  Timing buffered disk reads:  106 MB in  3.04 seconds =  34.87 MB/sec

/dev/sdb5:
  Timing buffer-cache reads:   640 MB in  2.00 seconds = 320.00 MB/sec
  Timing buffered disk reads:  106 MB in  3.01 seconds =  35.22 MB/sec

/dev/md0:
  Timing buffer-cache reads:   652 MB in  2.01 seconds = 324.38 MB/sec
  Timing buffered disk reads:  208 MB in  3.00 seconds =  69.33 MB/sec

[15:34] abit:~ > cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid0 sdb5[1] sda5[0]
       4192768 blocks 32k chunks

md1 : active raid0 sdb6[1] sda6[0]
       2425600 blocks 32k chunks

md2 : active raid0 sdb7[1] sda7[0]
       12707200 blocks 64k chunks

md3 : active raid0 sdb8[1] sda8[0]
       20113152 blocks 64k chunks

unused devices: <none>
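
For anyone building a similar layout with mdadm rather than the old 
raidtab, the equivalent creation commands would look roughly like this, 
with chunk sizes taken from the mdstat output above (a sketch, not how 
this box was actually set up):

   mdadm --create /dev/md0 --level=0 --chunk=32 --raid-devices=2 /dev/sda5 /dev/sdb5
   mdadm --create /dev/md1 --level=0 --chunk=32 --raid-devices=2 /dev/sda6 /dev/sdb6
   mdadm --create /dev/md2 --level=0 --chunk=64 --raid-devices=2 /dev/sda7 /dev/sdb7
   mdadm --create /dev/md3 --level=0 --chunk=64 --raid-devices=2 /dev/sda8 /dev/sdb8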

[15:34] abit:~ > fdisk -l /dev/sd{a,b}

Disk /dev/sda: 255 heads, 63 sectors, 2501 cylinders
Units = cylinders of 16065 * 512 bytes

    Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1        13    104391   83  Linux
/dev/sda2            14        45    257040   82  Linux swap
/dev/sda3            46      2500  19719787+   5  Extended
/dev/sda5            46       306   2096451   fd  Linux raid autodetect
/dev/sda6           307       457   1212876   fd  Linux raid autodetect
/dev/sda7           458      1248   6353676   fd  Linux raid autodetect
/dev/sda8          1249      2500  10056658+  fd  Linux raid autodetect

Disk /dev/sdb: 255 heads, 63 sectors, 2501 cylinders
Units = cylinders of 16065 * 512 bytes

    Device Boot    Start       End    Blocks   Id  System
/dev/sdb1   *         1        13    104391   83  Linux
/dev/sdb2            14        45    257040   82  Linux swap
/dev/sdb3            46      2500  19719787+   5  Extended
/dev/sdb5            46       306   2096451   fd  Linux raid autodetect
/dev/sdb6           307       457   1212876   fd  Linux raid autodetect
/dev/sdb7           458      1248   6353676   fd  Linux raid autodetect
/dev/sdb8          1249      2500  10056658+  fd  Linux raid autodetect

[15:34] abit:~ > mount -l | grep data=
/dev/sda1 on / type ext3 (rw,data=ordered) [root]
/dev/md0 on /usr type ext3 (rw,noatime,data=writeback) [usr]
/dev/md1 on /home type ext3 (rw,noatime,data=writeback) [home]
/dev/md2 on /big type ext3 (rw,noatime,data=writeback) [big]
/dev/md3 on /spare type ext3 (rw,noatime,data=writeback) [spare]
/dev/hdc1 on /snapshot type ext3 (ro,data=journal) [snap]
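
The corresponding /etc/fstab entries would be along these lines; this is 
reconstructed from the mount output above, not pasted from the actual file:

   /dev/sda1   /           ext3   defaults,data=ordered      1 1
   /dev/md0    /usr        ext3   noatime,data=writeback     1 2
   /dev/md1    /home       ext3   noatime,data=writeback     1 2
   /dev/md2    /big        ext3   noatime,data=writeback     1 2
   /dev/md3    /spare      ext3   noatime,data=writeback     1 2
   /dev/hdc1   /snapshot   ext3   ro,data=journal            1 2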

James Knowles wrote:
> 
>> Along these lines, does anyone know how much overhead Software RAID 
>> introduces?

> 
> I'd like to counter the idea that soft RAID introduces a lot of 
> overhead. I've been using it since it was first introduced into the kernel.
> 
> It doesn't really add much overhead, even on the pokey 233MHz server 
> (soft RAID-5) that's been in service since the 233MHz Pentium II was the 
> rage of the day. For old machines I expect about a 5% increase in CPU 
> overhead with IDE, lower with SCSI. The worst I ever saw was a 15% CPU 
> increase under heavy load with IDE drives.
> 
> Of course, on modern multi-GHz processors, soft RAID hardly impacts the 
> machine.
> 
> Since soft RAID was stabilized in the kernel years ago, I've never seen 
> it put data at risk, even under the harshest loads.
> 
> Since hard drives have become cheap, we've always run our workstations 
> with soft RAID-1, which has saved our bacon more than once over the 
> years. It costs a little more to slap in an extra hard drive, but that's 
> cheaper than what a dead drive does to a business on the rare day when 
> one decides to roll over and die. The worst case is when the HD takes 
> out swap, the 
> kernel panics, and we have to yank the cable to the offending drive. 
> When there's a failure, we can wait until after hours to actually swap 
> out the drive.
> 
> Anaconda has a nice interface for setting up soft RAID during install.
> 

-- 
  | for direct reply add "private_" in front of user name