<!DOCTYPE linuxdoc system
[ <!ENTITY CurrentVer "0.90.2 - Alpha">
  <!ENTITY mdstat "<TT>/proc/mdstat</TT>">
  <!ENTITY ftpkernel "<TT>ftp://ftp.fi.kernel.org/pub/linux</TT>">
  <!ENTITY fstab "<TT>/etc/fstab</TT>">
  <!ENTITY raidtab "<TT>/etc/raidtab</TT>">
]
>

<ARTICLE>

<TITLE>The Software-RAID HOWTO
<AUTHOR>Jakob &Oslash;stergaard
        (<htmlurl
        url="mailto:jakob@ostenfeld.dk"
        name="jakob@ostenfeld.dk">)
<DATE>v. &CurrentVer;, 27th February 1999

<ABSTRACT>
This HOWTO describes how to use Software RAID under Linux. You must be
using the RAID patches available from <htmlurl
name="ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha"
url="ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha">. The HOWTO
can be found at <htmlurl
name="http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/"
url="http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/">.
</ABSTRACT>

<TOC>

<SECT>Introduction
<P>
This HOWTO was written by Jakob &Oslash;stergaard based on a large
number of emails between the author and Ingo Molnar (<htmlurl
url="mailto:mingo@chiara.csoma.elte.hu"
name="mingo@chiara.csoma.elte.hu">) -- one of the RAID developers --
the linux-raid mailing list (<htmlurl
url="mailto:linux-raid@vger.rutgers.edu"
name="linux-raid@vger.rutgers.edu">) and various other people.
<P>
The reason this HOWTO was written even though a Software-RAID HOWTO
already exists is that the old HOWTO describes the old-style
Software RAID found in the stock kernels. This HOWTO describes the use
of the ``new-style'' RAID that has been developed more recently. The
new-style RAID has a lot of features not present in old-style RAID.
<P>
Some of the information in this HOWTO may seem trivial if you already
know RAID. Just skip those parts.
<P>

<SECT1>Disclaimer
<P>
The mandatory disclaimer:
<P>
Although RAID seems stable to me, and stable for many other people,
it may not work for you.  If you lose all your data, your job, get
hit by a truck, whatever, it's not my fault, nor the developers'.  Be
aware that you use the RAID software and this information at
your own risk!  There is no guarantee whatsoever that any of the
software, or this information, is in any way correct, or suited for
any use whatsoever. Back up all your data before experimenting with
this. Better safe than sorry.
<P>

<SECT1>Requirements
<P>
This HOWTO assumes you are using a late 2.2.x or 2.0.x kernel with a
matching raid0145 patch, and the 0.90 version of the raidtools. Both
can be found at <HTMLURL
name="ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha"
url="ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha">. The RAID
patch, the raidtools package, and the kernel should all match as
closely as possible. At times it can be necessary to use older kernels
if RAID patches are not available for the latest kernel.
<P>

<SECT>Why RAID ?
<P>
There can be many good reasons for using RAID. A few are: the ability
to combine several physical disks into one larger ``virtual'' device,
performance improvements, and redundancy.
<P>

<SECT1>Technicalities
<P>
Linux RAID can work on most block devices. It doesn't matter whether
you use IDE or SCSI devices, or a mixture. Some people have also used
the Network Block Device (NBD) with more or less success.
<P>
Be sure that the bus(ses) to the drives are fast enough. You shouldn't
have 14 UW-SCSI drives on one UW bus, if each drive can give 10 MB/s
and the bus can only sustain 40 MB/s.
Also, you should only have one device per IDE bus. Running disks as
master/slave is horrible for performance. IDE is really bad at
accessing more than one drive per bus.  Of course, all newer
motherboards have two IDE busses, so you can set up two disks in
RAID without buying more controllers.
<P>
The RAID layer has absolutely nothing to do with the filesystem
layer. You can put any filesystem on a RAID device, just like any
other block device.
<P>

<SECT1>Terms
<P>
The word ``RAID'' means ``Linux Software RAID''. This HOWTO does not
treat any aspects of Hardware RAID.
<P>
When describing setups, it is useful to refer to the number of disks
and their sizes. At all times the letter <BF>N</BF> is used to denote
the number of active disks in the array (not counting
spare-disks). The letter <BF>S</BF> is the size of the smallest drive
in the array, unless otherwise mentioned. The letter <BF>P</BF>
denotes the performance of one disk in the array, in MB/s. When used,
we assume that the disks are equally fast, which may not always be true.
<P>
Note that the words ``device'' and ``disk'' are used to mean roughly
the same thing.  Usually the devices that are used to build a RAID
device are partitions on disks, not necessarily entire disks.  But
combining several partitions on one disk usually does not make sense,
so the words devices and disks just mean ``partitions on different
disks''.
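As a purely hypothetical illustration of the N and P notation (the
disk count and per-disk speed below are made-up figures, and the
performance claims are the rules of thumb given later in this HOWTO,
not measurements):

```shell
# Hypothetical figures: four active disks (N=4), each able to
# deliver about 10 MB/s (P=10).
N=4
P=10
# RAID-0 reads and writes hit all disks in parallel, so the best
# case is close to N*P:
echo "RAID-0, best case: $((N * P)) MB/s"
# RAID-1 reads can also be spread over all mirrors, but every write
# goes to every disk, so writes stay around P:
echo "RAID-1 reads: up to $((N * P)) MB/s, writes: about $P MB/s"
```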
<P>

<SECT1>The RAID levels
<P>
Here's a short description of what is supported in the Linux RAID
patches. Some of this information is absolutely basic RAID info, but
I've added a few notes about what's special in the Linux
implementation of the levels.  Just skip this section if you know
RAID. Then come back when you are having problems   :)
<P>
The current RAID patches for Linux support the following
levels:
<ITEMIZE>
<ITEM><BF>Linear mode</BF>
<ITEMIZE>
<ITEM>Two or more disks are combined into one large device. The disks
are ``appended'' to each other, so writing to the RAID device will fill
up disk 0 first, then disk 1 and so on. The disks do not have to be
of the same size. In fact, size doesn't matter at all here   :)
<ITEM>There is no redundancy in this level. If one disk crashes you will
most probably lose all your data.  You may, however, be lucky and
recover some data, since the filesystem will just be missing one large
consecutive chunk of data.
<ITEM>The read and write performance will not increase for single
reads/writes. But if several users use the device, you may be lucky
enough that one user effectively uses only the first disk, while
another accesses files which happen to reside on the second disk. If
that happens, you will see a performance gain.
</ITEMIZE>
<ITEM><BF>RAID-0</BF>
<ITEMIZE>
<ITEM>Also called ``stripe'' mode. Like linear mode, except that reads and
writes are done in parallel to the devices. The devices should have
approximately the same size. Since all access is done in parallel, the
devices fill up equally. If one device is much larger than the other
devices, that extra space is still utilized in the RAID device, but
you will be accessing this larger disk alone, during writes in the
high end of your RAID device. This of course hurts performance.
<ITEM>Like linear, there's no redundancy in this level either. Unlike
linear mode, you will not be able to rescue any data if a drive
fails. If you remove a drive from a RAID-0 set, the RAID device will
not just miss one consecutive block of data, it will be filled with
small holes all over the device. e2fsck will probably not be able to
recover much from such a device.
<ITEM>The read and write performance will increase, because reads and
writes are done in parallel on the devices. This is usually the main
reason for running RAID-0. If the busses to the disks are fast enough,
you can get very close to N*P MB/sec.
</ITEMIZE>
<ITEM><BF>RAID-1</BF>
<ITEMIZE>
<ITEM>This is the first mode which actually has redundancy. RAID-1 can be
used on two or more disks with zero or more spare-disks. This mode maintains
an exact mirror of the information on one disk on the other
disk(s). Of course, the disks must be of equal size. If one disk is
larger than another, your RAID device will be the size of the
smallest disk.
<ITEM>If up to N-1 disks are removed (or crash), all data are still intact. If
there are spare disks available, and if the system (e.g. SCSI drivers
or IDE chipset etc.) survived the crash, reconstruction of the mirror
will immediately begin on one of the spare disks, after detection of
the drive fault.
<ITEM>Read performance will usually scale close to N*P, while write performance is
the same as on one device, or perhaps even less.  Reads can be done in
parallel, but when writing, the CPU must transfer N times as much data
to the disks as it usually would (remember, N identical copies of
all data must be sent to the disks).
</ITEMIZE>
<ITEM><BF>RAID-4</BF>
<ITEMIZE>
<ITEM>This RAID level is not used very often. It can be used on three
or more disks. Instead of completely mirroring the information, it
keeps parity information on one drive, and writes data to the other
disks in a RAID-0 like way.  Because one disk is reserved for parity
information, the size of the array will be (N-1)*S, where S is the
size of the smallest drive in the array. As in RAID-1, the disks
should either be of equal size, or you will just have to accept that
S will be the size of the smallest drive.
<ITEM>If one drive fails, the parity
information can be used to reconstruct all data.  If two drives fail,
all data is lost.
<ITEM>The reason this level is not more frequently used is that
the parity information is kept on one drive. This information must be
updated <EM>every</EM> time one of the other disks is written
to. Thus, the parity disk will become a bottleneck, if it is not a lot
faster than the other disks.  However, if you just happen to have a
lot of slow disks and a very fast one, this RAID level can be very useful.
</ITEMIZE>
<ITEM><BF>RAID-5</BF>
<ITEMIZE>
<ITEM>This is perhaps the most useful RAID mode when one wishes to combine
a larger number of physical disks, and still maintain some
redundancy. RAID-5 can be used on three or more disks, with zero or
more spare-disks. The resulting RAID-5 device size will be (N-1)*S,
just like RAID-4. The big difference between RAID-5 and -4 is that
the parity information is distributed evenly among the participating
drives, avoiding the bottleneck problem in RAID-4.
<ITEM>If one of the disks fails, all data are still intact, thanks to the
parity information. If spare disks are available, reconstruction will
begin immediately after the device failure.  If two disks fail
simultaneously, all data are lost. RAID-5 can survive one disk
failure, but not two or more.
<ITEM>Both read and write performance usually increase, but it's hard to
predict how much.
</ITEMIZE>
</ITEMIZE>
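The usable-capacity rules above can be summarized with a little
arithmetic. This is just a sketch over a made-up set of disk sizes,
not anything the raidtools compute for you:

```shell
# Hypothetical array: three 6 GB disks and one 9 GB disk.
disks="6 6 6 9"
N=0; sum=0; S=""
for d in $disks; do
    N=$((N + 1))
    sum=$((sum + d))
    # track the smallest disk, S
    if [ -z "$S" ] || [ "$d" -lt "$S" ]; then S=$d; fi
done
echo "linear/RAID-0: $sum GB (all space used)"
echo "RAID-1:        $S GB (everything mirrored)"
echo "RAID-4/5:      $(( (N - 1) * S )) GB (one disk's worth of parity)"
```

Note that RAID-1 and RAID-4/5 effectively waste the space above S on
any larger disks, which is why equal-sized disks are recommended for
those levels.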

<SECT2>Spare disks
<P>
Spare disks are disks that do not take part in the RAID set until one
of the active disks fails.  When a device failure is detected, that
device is marked as ``bad'' and reconstruction is immediately started
on the first spare-disk available.
<P>
Thus, spare disks add nice extra safety, especially to RAID-5 systems
that are hard to get to (physically). One can allow the system
to run for some time with a faulty device, since all redundancy is
preserved by means of the spare disk.
<P>
You cannot be sure that your system will survive a disk crash. The
RAID layer should handle device failures just fine, but SCSI drivers
could be broken on error handling, or the IDE chipset could lock up,
or a lot of other things could happen.
<P>


<SECT1>Swapping on RAID
<P>
There's no reason to use RAID for swap performance reasons. The kernel
itself can stripe swapping on several devices, if you just give them
the same priority in the fstab file.
<P>
A nice fstab looks like:
<VERB>
/dev/sda2       swap           swap    defaults,pri=1   0 0
/dev/sdb2       swap           swap    defaults,pri=1   0 0
/dev/sdc2       swap           swap    defaults,pri=1   0 0
/dev/sdd2       swap           swap    defaults,pri=1   0 0
/dev/sde2       swap           swap    defaults,pri=1   0 0
/dev/sdf2       swap           swap    defaults,pri=1   0 0
/dev/sdg2       swap           swap    defaults,pri=1   0 0
</VERB>
This setup lets the machine swap in parallel on seven SCSI devices. No
need for RAID, since this has been a kernel feature for a long time.
<P>
Another reason to use RAID for swap is high availability.  If you set
up a system to boot on e.g. a RAID-1 device, the system should be able
to survive a disk crash. But if the system has been swapping on the
now faulty device, you are surely going down.  Swapping on the
RAID-1 device would solve this problem.
<P>
However, swap on RAID-{1,4,5} is <BF>NOT</BF> supported. You can set it up,
but it will crash. The reason is that the RAID layer sometimes
allocates memory before doing a write. This leads to a deadlock, since
the kernel will have to allocate memory before it can swap, and swap
before it can allocate memory.
<P>
It's sad but true, at least for now.
<P>

<SECT>RAID setup
<P>
<SECT1>General setup
<P>
This is what you need for any of the RAID levels:
<ITEMIZE>
<ITEM>A kernel.  Get 2.0.36 or a recent 2.2.x kernel.
<ITEM>The RAID patches.  There usually is a patch available
   for the recent kernels.
<ITEM>The RAID tools.
<ITEM>Patience, Pizza, and your favourite caffeinated beverage.
</ITEMIZE>
<P>
All this software can be found at &ftpkernel;. The RAID
tools and patches are in the <TT>daemons/raid/alpha</TT>
subdirectory. The kernels are found in the <TT>kernel</TT>
subdirectory.
<P>
Patch the kernel, configure it to include RAID support for the level
you want to use.  Compile it and install it.
<P>
Then unpack, configure, compile and install the RAID tools.
<P>
Ok, so far so good.  If you reboot now, you should have a file called
&mdstat;.  Remember that file, it is your friend. See
what it contains by doing a <TT>cat </TT>&mdstat;. It should
tell you that you have the right RAID personality (i.e. RAID mode)
registered, and that no RAID devices are currently active.
<P>
Create the partitions you want to include in your RAID set.
<P>
Now, let's go mode-specific.
<P>

<SECT1>Linear mode
<P>
Ok, so you have two or more partitions which are not necessarily the
same size (but of course can be), which you want to append to
each other.
<P>
Set up the &raidtab; file to describe your
setup. I set up a raidtab for two disks in linear mode, and the file
looked like this:
<P>
<VERB>
raiddev /dev/md0
        raid-level      linear
        nr-raid-disks   2
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1
</VERB>
Spare-disks are not supported here.  If a disk dies, the array dies
with it. There's no information to put on a spare disk.
<P>
Ok, let's create the array. Run the command
<VERB>
  mkraid /dev/md0
</VERB>
<P>
This will initialize your array, write the persistent superblocks, and
start the array.
<P>
Have a look in &mdstat;. You should see that the array is running.
<P>
Now, you can create a filesystem, just like you would on any other
device, mount it, include it in your fstab and so on.
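For example, a hypothetical &fstab; line for the new device could look
like the one below. The mount point and filesystem type are
assumptions for illustration; mkraid does not set any of this up for
you:

```
/dev/md0        /mnt/raid      ext2    defaults         0 2
```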
<P>

<SECT1>RAID-0
<P>
You have two or more devices, of approximately the same size, and you 
want to combine their storage capacity and also combine their
performance by accessing them in parallel.
<P>
Set up the &raidtab; file to describe your configuration. An
example raidtab looks like:
<VERB>
raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        persistent-superblock 1
        chunk-size     4
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1
</VERB>
As in linear mode, spare disks are not supported here either. RAID-0
has no redundancy, so when a disk dies, the array goes with it.
<P>
Again, you just run 
<VERB>
  mkraid /dev/md0
</VERB>
to initialize the array. This should initialize the superblocks and
start the raid device.  Have a look in &mdstat; to see what's
going on. You should see that your device is now running.
<P>
/dev/md0 is now ready to be formatted, mounted, used and abused.
<P>

<SECT1>RAID-1
<P>
You have two devices of approximately the same size, and you want the two
to be mirrors of each other. Perhaps you have additional devices, which
you want to keep as stand-by spare-disks, that will automatically
become a part of the mirror if one of the active devices breaks.
<P>
Set up the &raidtab; file like this:
<VERB>
raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  0
        chunk-size     4
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1
</VERB>
If you have spare disks, you can add them to the end of the device
specification like
<VERB>
        device          /dev/sdd5
        spare-disk      0
</VERB>
Remember to set the nr-spare-disks entry correspondingly.
<P>
Ok, now we're all set to start initializing the RAID. The mirror must
be constructed, i.e. the contents (however unimportant now, since the
device is still not formatted) of the two devices must be
synchronized.
<P>
Issue the
<VERB>
  mkraid /dev/md0
</VERB>
command to begin the mirror initialization.
<P>
Check out the &mdstat; file. It should tell you that the /dev/md0
device has been started, that the mirror is being reconstructed, and
an ETA of the completion of the reconstruction.
<P>
Reconstruction is done using idle I/O bandwidth. So, your system
should still be fairly responsive, although your disk LEDs should be
glowing nicely.
<P>
The reconstruction process is transparent, so you can actually use the
device even though the mirror is currently under reconstruction.
<P>
Try formatting the device, while the reconstruction is running. It
will work.  Also you can mount it and use it while reconstruction is
running. Of course, if the wrong disk breaks while the reconstruction
is running, you're out of luck.
<P>

<SECT1>RAID-4
<P>
<BF>Note!</BF> I haven't tested this setup myself. The setup below is
my best guess, not something I have actually had up running.
<P>
You have three or more devices of roughly the same size, one device is
significantly faster than the other devices, and you want to combine
them all into one larger device, still maintaining some redundancy
information.
Perhaps you also have a number of devices you wish to use as
spare-disks.
<P>
Set up the &raidtab; file like this:
<VERB>
raiddev /dev/md0
        raid-level      4
        nr-raid-disks   4
        nr-spare-disks  0
        persistent-superblock 1
        chunk-size      32
        device          /dev/sdb1
        raid-disk       0
        device          /dev/sdc1
        raid-disk       1
        device          /dev/sdd1
        raid-disk       2
        device          /dev/sde1
        raid-disk       3
</VERB>
If we had any spare disks, they would be inserted in a similar way,
following the raid-disk specifications;
<VERB>
        device         /dev/sdf1
        spare-disk     0
</VERB>
as usual.
<P>
Your array can be initialized with the
<VERB>
   mkraid /dev/md0
</VERB>
command as usual.
<P>
You should see the section on special options for mke2fs before
formatting the device.
<P>


<SECT1>RAID-5
<P>
You have three or more devices of roughly the same size, you want to
combine them into a larger device, but still maintain a degree of
redundancy for data safety. Perhaps you also have a number of devices to
use as spare-disks, that will not take part in the array before
another device fails.
<P>
If you use N devices where the smallest has size S, the size of the
entire array will be (N-1)*S. This ``missing'' space is used for
parity (redundancy) information.  Thus, if any disk fails, all data
stay intact. But if two disks fail, all data is lost.
<P>
Set up the &raidtab; file like this:
<VERB>
raiddev /dev/md0
        raid-level      5
        nr-raid-disks   7
        nr-spare-disks  0
        persistent-superblock 1
        parity-algorithm        left-symmetric
        chunk-size      32
        device          /dev/sda3
        raid-disk       0
        device          /dev/sdb1
        raid-disk       1
        device          /dev/sdc1
        raid-disk       2
        device          /dev/sdd1
        raid-disk       3
        device          /dev/sde1
        raid-disk       4
        device          /dev/sdf1
        raid-disk       5
        device          /dev/sdg1
        raid-disk       6
</VERB>
If we had any spare disks, they would be inserted in a similar way,
following the raid-disk specifications;
<VERB>
        device         /dev/sdh1
        spare-disk     0
</VERB>
And so on.
<P>
A chunk size of 32 KB is a good default for many general purpose
filesystems of this size. The array on which the above raidtab is
used consists of seven 6 GB disks, giving (N-1)*S = (7-1)*6 GB = 36 GB
of usable space. It holds an ext2 filesystem with a 4 KB block
size.  You could go higher with both array chunk-size and filesystem
block-size if your filesystem is either much larger, or just holds
very large files.
<P>
Ok, enough talking. You set up the raidtab, so let's see if it
works. Run the 
<VERB>
  mkraid /dev/md0
</VERB>
command, and see what happens.  Hopefully your disks start working
like mad, as they begin the reconstruction of your array. Have a look
in &mdstat; to see what's going on.
<P>
If the device was successfully created, the reconstruction process has
now begun.  Your array is not consistent until this reconstruction
phase has completed. However, the array is fully functional (except
for the handling of device failures of course), and you can format it
and use it even while it is reconstructing.
<P>
See the section on special options for mke2fs before formatting the
array.
<P>
Ok, now that you have your RAID device running, you can always stop it
or re-start it using the
<VERB>
  raidstop /dev/md0
</VERB>
or
<VERB>
  raidstart /dev/md0
</VERB>
commands.
<P>
Instead of putting these into init-files and rebooting a zillion times
to make that work, read on, and get autodetection running.
<P>

<SECT1>The Persistent Superblock
<P>
Back in ``The Good Old Days'' (TM), the raidtools would read your
&raidtab; file, and then initialize the array.  However, this would
require that the filesystem on which &raidtab; resided was
mounted. This is unfortunate if you want to boot on a RAID.
<P>
Also, the old approach led to complications when mounting filesystems
on RAID devices. They could not be put in the &fstab; file as usual, but
would have to be mounted from the init-scripts.
<P>
The persistent superblocks solve these problems. When an array is
initialized with the <TT>persistent-superblock</TT> option in the
&raidtab; file, a special superblock is written at the beginning of
all disks participating in the array. This allows the kernel to read
the configuration of RAID devices directly from the disks involved,
instead of reading from some configuration file that may not be
available at all times.
<P>
You should however still maintain a consistent &raidtab; file, since
you may need this file for later reconstruction of the array.
<P>
The persistent superblock is mandatory if you want auto-detection of
your RAID devices upon system boot. This is described in the
<BF>Autodetection</BF> section.
<P>

<SECT1>Chunk sizes
<P>
The chunk-size deserves an explanation.  You can never write
completely in parallel to a set of disks. If you had two disks and
wanted to write a byte, you would have to write four bits on each
disk; in fact, every second bit would go to disk 0 and the others to
disk 1. Hardware just doesn't support that.  Instead, we choose some
chunk-size, which we define as the smallest ``atomic'' amount of data
that can be written to the devices.  A write of 16 KB with a chunk
size of 4 KB will cause the first and the third 4 KB chunks to be
written to the first disk, and the second and fourth chunks to be
written to the second disk, in the RAID-0 case with two disks.  Thus,
for large writes, you may see lower overhead by having fairly large
chunks, whereas arrays that are primarily holding small files may
benefit more from a smaller chunk size.
<P>
Chunk sizes can be specified for all RAID levels except linear
mode.
<P>
For optimal performance, you should experiment with the value, as well
as with the block-size of the filesystem you put on the array.
<P>
The argument to the chunk-size option in &raidtab; specifies the
chunk-size in kilobytes. So ``4'' means ``4 KB''.
<P>
<SECT2>RAID-0
<P>
Data is written ``almost'' in parallel to the disks in the
array. Actually, <TT>chunk-size</TT> bytes are written to each disk,
serially.
<P>
If you specify a 4 KB chunk size, and write 16 KB to an array of three
disks, the RAID system will write 4 KB to disks 0, 1 and 2, in
parallel, then the remaining 4 KB to disk 0.
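The mapping in the example above follows from simple modular
arithmetic: in plain RAID-0 striping, chunk number i lands on disk
i modulo the number of disks. A sketch of that, using the same figures
as the example:

```shell
# Chunk-size 4 KB, three disks, a 16 KB write: four chunks in total,
# and chunk i goes to disk (i mod ndisks).
chunk_kb=4
ndisks=3
write_kb=16
i=0
while [ $((i * chunk_kb)) -lt "$write_kb" ]; do
    echo "chunk $i -> disk $((i % ndisks))"
    i=$((i + 1))
done
```

This prints chunks 0, 1 and 2 going to disks 0, 1 and 2, and the
remaining chunk 3 wrapping around to disk 0, matching the text.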
<P>
A 32 KB chunk-size is a reasonable starting point for most arrays. But
the optimal value depends very much on the number of drives involved,
the content of the filesystem you put on it, and many other factors.
Experiment with it, to get the best performance.
<P>
<SECT2>RAID-1
<P>
For writes, the chunk-size doesn't affect the array, since all data
must be written to all disks no matter what.  For reads however, the
chunk-size specifies how much data to read serially from the
participating disks.  Since all active disks in the array
contain the same information, reads can be done in a parallel RAID-0
like manner.
<P>
<SECT2>RAID-4
<P>
When a write is done on a RAID-4 array, the parity information must be
updated on the parity disk as well. The chunk-size is the size of the
parity blocks. If one byte is written to a RAID-4 array, then
<TT>chunk-size</TT> bytes will be read from the N-1 disks, the parity
information will be calculated, and <TT>chunk-size</TT> bytes written
to the parity disk.
<P>
The chunk-size affects read performance in the same way as in RAID-0,
since reads from RAID-4 are done in the same way.
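The cost of a small RAID-4 write, as described above, can be sketched
with a little arithmetic. The disk count and chunk-size below are
assumed figures for illustration only:

```shell
# Per the text: for a tiny write, chunk-size bytes are read from each
# of the N-1 data disks, parity is recalculated, and chunk-size bytes
# are written to the parity disk.
N=4
chunk_kb=32
read_kb=$(( (N - 1) * chunk_kb ))
write_kb=$chunk_kb
echo "a one-byte write costs about ${read_kb} KB read, ${write_kb} KB written to parity"
```

This amplification is why the parity disk becomes a bottleneck unless
it is much faster than the data disks.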
<P>
<SECT2>RAID-5
<P>
On RAID-5 the chunk-size has exactly the same meaning as in
RAID-4.
<P>
A reasonable chunk-size for RAID-5 is 128 KB, but as always, you may
want to experiment with this.
<P>
Also see the section on special options for mke2fs.  This affects
RAID-5 performance.
<P>

<SECT1>Options for mke2fs
<P>
There is a special option available when formatting RAID-4 or -5
devices with mke2fs. The <TT>-R stride=nn</TT> option will allow
mke2fs to better place different ext2 specific data-structures in an
intelligent way on the RAID device.
<P>
If the chunk-size is 32 KB, it means that 32 KB of consecutive data
will reside on one disk. If we want to build an ext2 filesystem with 4
KB block-size, we realize that there will be eight filesystem blocks
in one array chunk. We can pass this information on to the mke2fs
utility, when creating the filesystem:
<VERB>
  mke2fs -b 4096 -R stride=8 /dev/md0
</VERB>
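The stride value follows directly from the two sizes: it is simply the
chunk-size divided by the filesystem block-size. A quick sanity check
of the arithmetic behind the command above:

```shell
# stride = chunk-size / filesystem block-size.
# With 32 KB chunks and 4 KB ext2 blocks, eight blocks fit per chunk.
chunk_bytes=$((32 * 1024))
block_bytes=4096
stride=$((chunk_bytes / block_bytes))
echo "mke2fs -b $block_bytes -R stride=$stride /dev/md0"
```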
<P>
RAID-{4,5} performance is severely influenced by this option. I am
unsure how the stride option will affect other RAID levels. If anyone
has information on this, please send it in my direction.
<P>

<SECT1>Autodetection
<P>
Autodetection allows the RAID devices to be automatically recognized
by the kernel at boot-time, right after the ordinary partition
detection is done. 
<P>
This requires several things:
<ENUM>
<ITEM>You need autodetection support in the kernel. Check that it is enabled
<ITEM>You must have created the RAID devices using persistent-superblock
<ITEM>The partition-types of the devices used in the RAID must be set to
   <BF>0xFD</BF>  (use fdisk and set the type to ``fd'')
</ENUM>
<P>
NOTE: Be sure that your RAID is NOT RUNNING before changing the
partition types.  Use <TT>raidstop /dev/md0</TT> to stop the device.
<P>
If you have taken care of items 1, 2 and 3 above, autodetection should
be set up. Try rebooting.  When the system comes up, cat'ing &mdstat;
should tell you that your RAID is running.
<P>
During boot, you could see messages similar to these:
<VERB>
 Oct 22 00:51:59 malthe kernel: SCSI device sdg: hdwr sector= 512
  bytes. Sectors= 12657717 [6180 MB] [6.2 GB]
 Oct 22 00:51:59 malthe kernel: Partition check:
 Oct 22 00:51:59 malthe kernel:  sda: sda1 sda2 sda3 sda4
 Oct 22 00:51:59 malthe kernel:  sdb: sdb1 sdb2
 Oct 22 00:51:59 malthe kernel:  sdc: sdc1 sdc2
 Oct 22 00:51:59 malthe kernel:  sdd: sdd1 sdd2
 Oct 22 00:51:59 malthe kernel:  sde: sde1 sde2
 Oct 22 00:51:59 malthe kernel:  sdf: sdf1 sdf2
 Oct 22 00:51:59 malthe kernel:  sdg: sdg1 sdg2
 Oct 22 00:51:59 malthe kernel: autodetecting RAID arrays
 Oct 22 00:51:59 malthe kernel: (read) sdb1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sdb1,1>
 Oct 22 00:51:59 malthe kernel: (read) sdc1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sdc1,2>
 Oct 22 00:51:59 malthe kernel: (read) sdd1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sdd1,3>
 Oct 22 00:51:59 malthe kernel: (read) sde1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sde1,4>
 Oct 22 00:51:59 malthe kernel: (read) sdf1's sb offset: 6205376
 Oct 22 00:51:59 malthe kernel: bind<sdf1,5>
 Oct 22 00:51:59 malthe kernel: (read) sdg1's sb offset: 6205376
 Oct 22 00:51:59 malthe kernel: bind<sdg1,6>
 Oct 22 00:51:59 malthe kernel: autorunning md0
 Oct 22 00:51:59 malthe kernel: running: <sdg1><sdf1><sde1><sdd1><sdc1><sdb1>
 Oct 22 00:51:59 malthe kernel: now!
 Oct 22 00:51:59 malthe kernel: md: md0: raid array is not clean --
  starting background reconstruction 
</VERB>
This is output from the autodetection of a RAID-5 array that was not
cleanly shut down (e.g. the machine crashed).  Reconstruction is
automatically initiated.  Mounting this device is perfectly safe,
since reconstruction is transparent and all data are consistent (it's
only the parity information that is inconsistent - but that isn't
needed until a device fails).
<P>
Autostarted devices are also automatically stopped at shutdown.  Don't
worry about init scripts.  Just use the /dev/md devices as any other
/dev/sd or /dev/hd devices.
<P>
Yes, it really is that easy.
<P>
You may want to look in your init-scripts for any raidstart/raidstop
commands. These are often found in the standard RedHat init
scripts. They are used for old-style RAID, and have no use in new-style
RAID with autodetection. Just remove the lines, and everything will be
just fine.
<P>

<SECT1>Booting on RAID
<P>
This will be added in the near future.
<P>
The really really short nano-howto goes:
<ITEMIZE>
<ITEM>Put two identical disks in a system.
<ITEM> Put in a third disk, on which you install a complete Linux system.
<ITEM> Now set up the two identical disks each with a /boot,
   swap and / partition.
<ITEM> Configure RAID-1 on the two / partitions.
<ITEM> Copy the entire installation from the third disk to the RAID.
   (just using tar, no raw copying !)
<ITEM> Set up the /boot on the first disk.  Run lilo.  You probably 
   want to set the root fs device to be  900, since LILO doesn't 
   really handle the /dev/md devices.  /dev/md0 is major 9, minor 0,
   so root=900 (the device number in hexadecimal) should work.
<ITEM> Set up /boot on the second disk just like you did on the first.
<ITEM> In the bios, in the case of IDE disks, set the disk types to
   autodetect.  In the fstab, make sure you are not mounting any of
   the /boot filesystems. You don't need them, and in case of device
   failure, you will just get stuck in the boot sequence when trying
   to mount a non-existing device.
<ITEM> Try booting on just one of the disks. Try booting on the other disk
   only. If this works, you're up and running.
<ITEM> Document what you did, mail it to me, and I'll put it in here.
</ITEMIZE>
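The root=900 value in the list above is just the device number written
in hexadecimal: major * 256 + minor, with /dev/md0 being major 9,
minor 0. A quick check of that arithmetic:

```shell
# LILO's numeric root= argument is the device number in hex,
# computed as major * 256 + minor.  /dev/md0 is major 9, minor 0:
major=9
minor=0
printf 'root=%x%02x\n' "$major" "$minor"
```

The same scheme gives root=901 for /dev/md1, and so on.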

<SECT1>Pitfalls
<P>
Never NEVER <BF>never</BF> re-partition disks that are part of a running
RAID. If you must alter the partition table on a disk which is a part
of a RAID, stop the array first, then repartition.
<P>
It is easy to put too many disks on a bus. A normal Fast SCSI bus
can sustain 10 MB/s, which is less than many disks can do alone
today. Putting six such disks on the bus will of course not give you
the expected performance boost.
<P>
Adding more SCSI controllers will only give you extra performance if
the busses you already have are nearly maxed out by the disks on
them.  You will not see a performance improvement from using two 2940s
with two old SCSI disks, instead of just running the two disks on one
controller.
<P>
If you forget the persistent-superblock option, your array may not
start up willingly after it has been stopped.  Just re-create the
array with the option set correctly in the raidtab.
<P>
If a RAID-5 fails to reconstruct after a disk was removed and
re-inserted, this may be because of the ordering of the devices in the
raidtab. Try moving the first ``device ...'' and ``raid-disk ...''
pair to the bottom of the array description in the raidtab file.
<P>

<SECT>Credits
<P>
The following people contributed to the creation of this
documentation:
<ITEMIZE>
<ITEM>Ingo Molnar
<ITEM>Jim Warren
<ITEM>Louis Mandelstam
<ITEM>Allan Noah
<ITEM>Yasunori Taniike
<ITEM>The Linux-RAID mailing list
<ITEM>The ones I forgot,  sorry   :)
</ITEMIZE>
<P>
Please submit corrections, suggestions etc. to the author. It's the
only way this HOWTO can improve.

</ARTICLE>