How To Resize RAID Partitions (Shrink & Grow) (Software RAID)

https://www.howtoforge.com/how-to-resize-raid-partitions-shrink-and-grow-software-raid

Version 1.0
Author: Falko Timme
Last edited 11/24/2008

This article describes how you can shrink and grow existing software RAID partitions. I have tested this with non-LVM RAID1 partitions that use ext3 as the file system. I will describe this procedure for an intact RAID array and also a degraded RAID array.

If you use LVM on your RAID partitions, the procedure will be different, so do not use this tutorial in this case!

I do not issue any guarantee that this will work for you!

1 Preliminary Note

A few days ago I found out that one of my servers had a degraded RAID1 array (/dev/md2, made up of /dev/sda3 and /dev/sdb3; /dev/sda3 had failed, /dev/sdb3 was still active):

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
4594496 blocks [2/1] [_U]

md1 : active raid1 sda2[0] sdb2[1]
497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
144448 blocks [2/2] [UU]

unused devices: <none>
server1:~#

I tried to fix it (using this tutorial), but unfortunately at the end of the sync process (at 99.9% complete), the sync stopped and started over again. As I found out, this happened because there were some defective sectors at the end of the (working) partition /dev/sdb3 – this was in /var/log/kern.log:

Nov 22 18:51:06 server1 kernel: sdb: Current: sense key: Aborted Command
Nov 22 18:51:06 server1 kernel: end_request: I/O error, dev sdb, sector 1465142856

So this was the worst case that could happen – /dev/sda dead and /dev/sdb about to die. To fix this, my plan was to shrink /dev/md2 so that it excludes the broken sectors at the end of /dev/sdb3, then add the new /dev/sda3 (from the replaced hard drive) to /dev/md2, let the sync finish, remove /dev/sdb3 from the array, replace /dev/sdb with a new hard drive, add the new /dev/sdb3 to /dev/md2, and finally grow /dev/md2 again.

This is one of the use cases for the following procedures (I will describe the process for an intact array and a degraded array).

Please note that /dev/md2 is my system partition (mount point /), so I had to use a rescue system (e.g. Knoppix Live-CD) to resize the array. If the array you want to resize is not your system partition, you probably don’t need to boot into a rescue system; but in either case, make sure that the array is unmounted!
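For example, to double-check from the rescue system that the array is not mounted before you touch it (using /dev/md2 as in my setup), something like this will do; if the grep shows a mount, unmount it first:

mount | grep md2
umount /dev/md2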

2 Intact Array

I will describe how to resize the array /dev/md2, made up of /dev/sda3 and /dev/sdb3.

2.1 Shrinking An Intact Array

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

Run

e2fsck -f /dev/md2

to check the file system.

/dev/md2 has a size of 40GB; I want to shrink it to 30GB. First we have to shrink the file system with resize2fs; to make sure that the file system fits into the 30GB, we make it a little bit smaller (25GB) so we have a little security margin, then shrink /dev/md2 to 30GB, and finally resize the file system (again with resize2fs) to the maximum possible value:

resize2fs /dev/md2 25G

Now we shrink /dev/md2 to 30GB. The --size value must be in KiBytes (30 x 1024 x 1024 = 31457280); make sure it is divisible by 64:

mdadm --grow /dev/md2 --size=31457280

Next we grow the file system to the largest possible value (if you don’t specify a size, resize2fs will use the largest possible value)…

resize2fs /dev/md2

… and run a file system check again:

e2fsck -f /dev/md2
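If you want to double-check the result before rebooting, you can compare the new array size with the file system's block count (the exact numbers will of course differ on your system):

mdadm --detail /dev/md2 | grep 'Array Size'
tune2fs -l /dev/md2 | grep -i 'block count'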

That’s it – you can now boot into the normal system again.

2.2 Growing An Intact Array

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

Now we can grow /dev/md2 as follows:

mdadm --grow /dev/md2 --size=max

--size=max means the largest possible value. You can also specify a size in KiBytes (see the previous chapter).
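For example, if the underlying partitions had been enlarged to 40GB and you wanted the array to use exactly that, the command would look like this (40 x 1024 x 1024 = 41943040 KiBytes; the value is only an illustration, not taken from my setup):

mdadm --grow /dev/md2 --size=41943040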

Then we run a file system check…

e2fsck -f /dev/md2

…, resize the file system…

resize2fs /dev/md2

… and check the file system again:

e2fsck -f /dev/md2

Afterwards you can boot back into your normal system.

Resize (grow) mdadm RAID1 device

I decided to rearrange space a bit on the new CentOS server I am setting up to replace the ageing Fedora 8 setup currently in use…

When I installed it (three weeks ago), I split the two 500GB hard drives into 3 partitions and 2 raid levels to suit my needs:

  • 100GB RAID1 /dev/md0 made up of /dev/sda1 and /dev/sdb1 (100GB on each disk) – mounted as / – this will store the OS and all important files, so redundancy is a must
  • 150GB RAID0 /dev/md1 made up of /dev/sda2 and /dev/sdb2 (75GB on each disk) – mounted as /vz – (testing) virtual machines' dedicated space; speed is a must, redundancy is not needed
  • 574GB (remaining space) RAID0 /dev/md2 made up of /dev/sda3 and /dev/sdb3 (287GB on each disk) – mounted as /down for file storage – non-important big files; speed and space needed, redundancy not a requirement

In the meantime I realized I reserved too much space for the virtual machines, so I decided to reduce it by 50GB (25GB on each drive) and transfer that space to the md0 device (RAID1).

Since both md1 and md2 are currently empty, and given the difficulty of finding information about resizing RAID0 devices (it might not even be possible), the first step for me was to delete the md1 device and its partitions (to make room for md0 to grow into).
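Deleting an md device basically boils down to stopping it and clearing the superblocks on its members before removing the partitions; with the device names listed above it would look roughly like this (double-check the names on your own system before wiping anything):

mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sda2 /dev/sdb2

After that, /dev/sda2 and /dev/sdb2 can be deleted in GParted (or fdisk).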

I initially tried to follow a tutorial which suggested I use
mdadm --grow /dev/md0 --size=XXXXX
to directly resize the RAID device using mdadm, but this failed for me with the “Not enough space on the device” error. This is easily confirmed with mdadm --examine /dev/sda1 (and /dev/sdb1):
/dev/sda1:
...
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 209715056 (100.00 GiB 107.37 GB) <-- maximum size is
Array Size : 209715056 (100.00 GiB 107.37 GB) <-- equal to current size
Super Offset : 209715184 sectors
State : clean
...

So I decided to do things my own way…

1. Prerequisites

Use a linux rescue system – SystemRescueCD is my favorite, but this time I used the PartedMagic live image (network booted) for its loading speed.

Backups are a must! Always back up when performing filesystem operations.

We will be using both the GUI GParted and the console mdadm tool.

We will be working on /dev/md0.

2. The steps

Boot into the rescue system. Make sure the raid device is not assembled (PartedMagic does not assemble RAID devices automatically, while SystemRescueCD does, as md12x devices).

Assemble the raid device:
mdadm --examine --scan
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

We will be resizing sda1 first, so we need to fail it and remove it from the raid:
mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1

Now stop the raid device:
mdadm -S /dev/md0

Open GParted and resize the sda1 partition to its new size. After the resize, GParted showed wrong used/free space values for the partition; we can ignore that for now.
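If you prefer the console over GParted, a reasonably recent parted can do the same job with its resizepart command; note that it expects the new end position of the partition, not its size (the 125GiB value only matches my example layout):

parted /dev/sda resizepart 1 125GiB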

Re-add sda1 to the raid:
mdadm --add /dev/md0 /dev/sda1

Now wait for it to sync. You can monitor the progress with:
cat /proc/mdstat
md0 : active raid1 sda1[2] sdb1[1]
104857528 blocks super 1.0 [2/1] [_U]
[===============>.....]  recovery = 79.9% (83821376/104857528) finish=4.1min speed=83548K/sec
bitmap: 0/1 pages [0KB], 65536KB chunk

Now that sda1 is the good, resized member, we fail sdb1 (to be able to resize it in GParted as well):
mdadm --fail /dev/md0 /dev/sdb1
mdadm --remove /dev/md0 /dev/sdb1

Resize the sdb1 partition in GParted. Then re-add it to md0 (data will sync once again to make the RAID1 device consistent). Once the sync is done, we can finally grow the md0 volume to its new size.
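The commands mirror the sda1 round from above:

mdadm --add /dev/md0 /dev/sdb1
cat /proc/mdstat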

Checking mdadm --examine /dev/sda1 (and /dev/sdb1) now shows the raid device has room to grow into:
/dev/sda1:
...
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 262143856 (125.00 GiB 134.22 GB) <-- available size is
Array Size : 209715056 (100.00 GiB 107.37 GB) <-- bigger than array size
Used Dev Size : 209715056 (100.00 GiB 107.37 GB)
Super Offset : 262143984 sectors
State : clean
...

Disable the write-intent bitmap on the raid device (otherwise the resize is not allowed):
mdadm --grow /dev/md0 --bitmap=none

We can finally grow the md0 device using:
mdadm --grow /dev/md0 --size=max <-- max=maximum allowed, in my case the desired 125GB

The device resyncs one more time.

Re-enable the write-intent bitmap:
mdadm --grow /dev/md0 --bitmap=internal

Then run a filesystem check:
e2fsck -f /dev/md0

And the final step is to grow the filesystem as well (so we can actually use the new size):
resize2fs /dev/md0
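If you want to see the new size before rebooting, you can mount the file system temporarily and check it (the mount point is just an example):

mount /dev/md0 /mnt
df -h /mnt
umount /mnt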

Done. Reboot into the normal system.

All that’s left now for me is to recreate the deleted md1 RAID0.

http://serverfault.com/questions/320310/how-to-resize-raid1-array-with-mdadm