
RAID and mdadm

# Scan all partitions for RAID superblocks, then assemble the arrays found
mdadm --examine --scan --config=partitions > /tmp/mdadm.conf
mdadm --assemble --scan --config=/tmp/mdadm.conf

0. Learn About RAID

Interactive RAID Tutorial | JetStor

RAID 0, RAID 1, RAID 5, RAID 10 Explained with Diagrams

Software Vs Hardware RAID

What are the different RAID levels for Linux / UNIX and Windows Server?

Linux Raid

Important for Setting up RAID

0.5 Building or Growing an Existing RAID Array:

# --build (re)creates an array that has no per-device superblock (legacy arrays)
mdadm --build --verbose --chunk=64K /dev/md1 --level=0 --raid-devices=2 /dev/sdh /dev/sdi

# The chunk option may not matter much here; unsure.
# Or grow an existing array by adding a disk, then resize the filesystem
# once the reshape has finished:

mdadm --add /dev/md0 /dev/sdg1
mdadm --grow /dev/md0 -n 5
fsck.ext3 -f /dev/md0
resize2fs /dev/md0
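
Growing with --grow starts a reshape in the background; one way to keep an eye on it until it finishes (the field in the comment is what a running reshape typically reports):

watch cat /proc/mdstat
mdadm --detail /dev/md0 | grep -i reshape    # e.g. "Reshape Status : 42% complete" while reshaping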

1. New RAID Array

mdadm --zero-superblock /dev/sda /dev/sdb    # Remove old RAID metadata; unsure whether the partitions (sda1, sdb1, etc.) also need this.

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --assemble /dev/md3 /dev/sda2 /dev/sdb2   # not really required
# OR create the array degraded, with the second disk added later:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext4 /dev/md0
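
If the array was created with the missing placeholder, a sketch of completing it later, assuming the freed-up disk ends up as /dev/sda1:

mdadm --add /dev/md0 /dev/sda1
watch cat /proc/mdstat    # wait for the resync to finish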

2. /etc/mdadm.conf

/etc/mdadm.conf or /etc/mdadm/mdadm.conf (on Debian)

After we create our RAID arrays, we add them to this file using:

mdadm --detail --scan >> /etc/mdadm.conf

or, on Debian:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
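
The appended lines look roughly like this (the UUID below is made up for illustration):

ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6

On Debian-based systems it is usually also worth running update-initramfs -u afterwards so the array is assembled correctly at boot.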

3. Remove a disk from an array

mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1

# OR, as one command:

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

# OR
mdadm --manage --set-faulty /dev/md0 /dev/sdc2
mdadm /dev/md0 --remove /dev/sdc2

4. Add a disk to an existing array

We can add a new disk to an array (e.g., to replace a failed one):

mdadm --add /dev/md0 /dev/sdc1
mdadm --grow --raid-devices=3 /dev/md0

# If md0 is a 2-drive RAID-1 and the second command is not run, the new disk
# is added as a spare (S), which only takes over when a drive fails.
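
In /proc/mdstat a spare shows up with an (S) marker, roughly like:

md0 : active raid1 sdc1[2](S) sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]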

5. Verifying the status of the RAID arrays

We can check the status of the arrays on the system with:

cat /proc/mdstat    # or
mdadm --detail /dev/md0

The output of cat /proc/mdstat will look like:

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb3[1] sda3[0]
      19542976 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]
      223504192 blocks [2/2] [UU]

A working device shows U and a failed one shows F; a degraded array shows _ in place of the missing disk (e.g. [U_]). During a rebuild, following the progress with watch is useful:

watch cat /proc/mdstat

6. Stop and delete a RAID array

If we want to completely remove a RAID array, we have to stop it first and then remove it:

mdadm --stop /dev/md0
mdadm --remove /dev/md0

Delete the superblocks, which ensures the member disks are no longer recognised as part of an array:

mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb1

Finally, for RAID-1 arrays, where we need identical partitions on both drives, this is useful for copying the partition table from sda to sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb
# OR
sfdisk -d /dev/sda | sfdisk --force /dev/sdb

partprobe

(This dumps the partition table of sda and completely overwrites the existing partitions on sdb, so be sure you want this before running the command; it will not warn you at all.)
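
A quick sanity check that the layouts now match (standard util-linux tools, nothing array-specific assumed):

lsblk /dev/sda /dev/sdb
# or
sfdisk -l /dev/sda /dev/sdb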

7. SWAP on RAID

Why RAID? - SWAP
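
A minimal sketch of putting swap on a RAID-1 device (the device names and partitions here are assumptions, not from the link above):

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md1
swapon /dev/md1
# make it permanent with an /etc/fstab entry such as:
# /dev/md1  none  swap  sw  0  0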

8. Speed Up RAID Building, Rebuilding and Re-syncing

# echo 50000 > /proc/sys/dev/raid/speed_limit_min
# sysctl -w dev.raid.speed_limit_min=50000

#1: /proc/sys/dev/raid/{speed_limit_max,speed_limit_min} kernel variables

The /proc/sys/dev/raid/speed_limit_min file reflects the current "goal" rebuild speed for times when non-rebuild activity is happening on an array. The speed is in kibibytes per second (1 kibibyte = 2^10 bytes = 1024 bytes) and is a per-device rate, not a per-array rate. The default is 1000. The /proc/sys/dev/raid/speed_limit_max file reflects the current "goal" rebuild speed for times when no non-rebuild activity is happening on an array. The default is 100,000. To see the current limits, enter:

# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max

To increase speed, enter:

echo value > /proc/sys/dev/raid/speed_limit_min
OR
sysctl -w dev.raid.speed_limit_min=value

For example, to set it to 50000 KiB/s, enter:

# echo 50000 > /proc/sys/dev/raid/speed_limit_min
OR
# sysctl -w dev.raid.speed_limit_min=50000

If you want to override the defaults you could add these two lines to /etc/sysctl.conf:

#################NOTE ################
##  You are limited by CPU and memory too #
###########################################
dev.raid.speed_limit_min = 50000
## good for 4-5 disks based array ##
dev.raid.speed_limit_max = 2000000
## good for large 6-12 disks based array ###
dev.raid.speed_limit_max = 5000000
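
To apply the new values from /etc/sysctl.conf without a reboot:

sysctl -p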

#2: Set read-ahead option

Set readahead (in 512-byte sectors) per raid device. The syntax is:

# blockdev --setra 65536 /dev/mdX
## Set read-ahead to 32 MiB ##
# blockdev --setra 65536 /dev/md0
# blockdev --setra 65536 /dev/md1
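
To check the read-ahead value currently in effect:

blockdev --getra /dev/md0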

#3: Set stripe_cache_size for RAID5 or RAID6

This is only available for RAID5 and RAID6 and can boost sync performance by 3-6 times. It records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256. Valid values are 17 to 32768. Increasing this number can increase performance in some situations, at some cost in system memory. Note: setting this value too high can result in an "out of memory" condition for the system. Use the following formula:

memory_consumed = system_page_size * nr_disks * stripe_cache_size
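
For example, assuming a 4 KiB page size and a 4-disk array with stripe_cache_size = 16384:

# memory_consumed = 4096 * 4 * 16384 = 268435456 bytes = 256 MiB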

To set stripe_cache_size to 16384 (pages per device) for /dev/md0, type:

# echo 16384 > /sys/block/md0/md/stripe_cache_size

To set stripe_cache_size to 32768 for /dev/md3, type:

# echo 32768 > /sys/block/md3/md/stripe_cache_size

#4: Disable NCQ on all disks

The following will disable NCQ on /dev/sda, /dev/sdb, ..., /dev/sde using a bash for loop:

for i in sda sdb sdc sdd sde
do
  echo 1 > /sys/block/$i/device/queue_depth
done
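
To check the current depth, and to re-enable NCQ later, write the original value back (31 is a common default, but note yours before changing it):

cat /sys/block/sda/device/queue_depth
echo 31 > /sys/block/sda/device/queue_depth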

#5: Bitmap Option

Bitmaps optimize rebuild time after a crash, or after removing and re-adding a device. Turn on an internal write-intent bitmap with:

# mdadm --grow --bitmap=internal /dev/md0

Once the array is rebuilt or fully synced, disable the bitmap:

# mdadm --grow --bitmap=none /dev/md0
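
When an internal bitmap is active, /proc/mdstat shows an extra "bitmap: ..." line under the array, so checking is just:

cat /proc/mdstat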

Sources:

http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
https://raid.wiki.kernel.org/index.php/Growing    
https://wiki.archlinux.org/index.php/RAID
http://keck.ucsf.edu/~idl/CustomSoftware/Software-Raid-Insructions.html

LVM and RAID Sources:

https://wiki.archlinux.org/index.php/Software_RAID_and_LVM
http://www.gagme.com/greg/linux/raid-lvm.php
http://www.howtoforge.com/setting-up-lvm-on-top-of-software-raid1-rhel-fedora
http://www.howtoforge.com/set-up-raid1-on-a-running-lvm-system-debian-etch

Troubleshooting:

https://raid.wiki.kernel.org/index.php/Tweaking,_tuning_and_troubleshooting

Failure and Rebuild:

http://www.cyberciti.biz/faq/howto-rebuilding-a-raid-array-after-a-disk-fails/  ## RAID-5
https://raid.wiki.kernel.org/index.php/RAID_Recovery
http://www.unix.com/filesystems-disks-memory/199139-reconstructing-raid.html