How to resize RAID1 array with mdadm?

I’m running Ubuntu 11.04 (kernel 2.6.38-11). I replaced 2x160GB drives with 2x500GB drives. They are configured as RAID1.

The partition tables show the right sizes. Here’s sfdisk:

# sfdisk -d /dev/sdb
# partition table of /dev/sdb
unit: sectors

/dev/sdb1 : start=       63, size=   192717, Id=fd, bootable
/dev/sdb2 : start=   192780, size=  7807590, Id=fd
/dev/sdb3 : start=  8000370, size=968767695, Id=fd
/dev/sdb4 : start=        0, size=        0, Id= 0

And fdisk:

# fdisk -l /dev/sdb

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006c78f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          12       96358+  fd  Linux raid autodetect
/dev/sdb2              13         498     3903795   fd  Linux raid autodetect
/dev/sdb3             499       60801   484383847+  fd  Linux raid autodetect

But I’m not seeing the new space:

root@green:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              143G  134G  8.3G  95% /

root@green:~# mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : b8f83980:f60d820c:74c46fbf:0baa68bc
  Creation Time : Sun Mar 29 18:48:46 2009
     Raid Level : raid1
  Used Dev Size : 152247936 (145.19 GiB 155.90 GB)
     Array Size : 152247936 (145.19 GiB 155.90 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2

    Update Time : Mon Oct 10 19:22:36 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 7b5debb7 - correct
         Events : 10729526


      Number   Major   Minor   RaidDevice State
this     0       8       19        0      active sync   /dev/sdb3

   0     0       8       19        0      active sync   /dev/sdb3
   1     1       8        3        1      active sync   /dev/sda3

I tried mdadm and resize2fs:

# mdadm --grow /dev/md2 --size=max
mdadm: component size of /dev/md2 has been set to 152247936K

# resize2fs /dev/md2
resize2fs 1.41.14 (22-Dec-2010)
The filesystem is already 38061984 blocks long.  Nothing to do!

Any ideas?

Added per request

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[0] sda3[1]
      152247936 blocks [2/2] [UU]

md0 : active raid1 sdb1[0] sda1[1]
      96256 blocks [2/2] [UU]

md1 : active raid1 sdb2[0] sda2[1]
      3903680 blocks [2/2] [UU]

unused devices: <none>

partitions

# cat /proc/partitions
major minor  #blocks  name

   8        0  488386584 sda
   8        1      96358 sda1
   8        2    3903795 sda2
   8        3  152248005 sda3
   8       16  488386584 sdb
   8       17      96358 sdb1
   8       18    3903795 sdb2
   8       19  152248005 sdb3
   9        1    3903680 md1
   9        0      96256 md0
   9        2  152247936 md2

parted:

# parted
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print all
Model: ATA WDC WD5000AAKX-0 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system     Flags
 1      32.3kB  98.7MB  98.7MB  primary  ext3            boot, raid
 2      98.7MB  4096MB  3997MB  primary  linux-swap(v1)  raid
 3      4096MB  500GB   496GB   primary  ext3            raid

Model: ATA WDC WD5000AAKS-4 (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system     Flags
 1      32.3kB  98.7MB  98.7MB  primary  ext3            boot, raid
 2      98.7MB  4096MB  3997MB  primary  linux-swap(v1)  raid
 3      4096MB  500GB   496GB   primary  ext3            raid

Model: Linux Software RAID Array (md)
Disk /dev/md1: 3997MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system     Flags
 1      0.00B  3997MB  3997MB  linux-swap(v1)

Model: Linux Software RAID Array (md)
Disk /dev/md0: 98.6MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  98.6MB  98.6MB  ext3

Model: Linux Software RAID Array (md)
Disk /dev/md2: 156GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  156GB  156GB  ext3

Comment via email:

The problem is in the metadata; you just need to assemble the RAID array with the parameter --update=devicesize,

and after that mdadm -G /dev/md? -z max will do the job 🙂
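Spelled out against the device names in this question, that advice amounts to roughly the following (a sketch, not tested here; since md2 holds the root filesystem, the stop/assemble steps would have to be run from rescue media):

```shell
# Stop the array so it can be re-assembled with updated metadata.
mdadm --stop /dev/md2

# Re-assemble, asking mdadm to refresh the device size stored in the
# superblock from the actual partition size. sdb3 is 968767695 sectors,
# i.e. 484383847 KiB, so the components are really about 462 GiB each.
mdadm --assemble /dev/md2 --update=devicesize /dev/sda3 /dev/sdb3

# Grow the array to the full component size, then grow the filesystem.
mdadm --grow /dev/md2 --size=max
resize2fs /dev/md2
```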

 
   
What is the output of cat /proc/mdstat ? How about cat /proc/partitions ? –  Steven Monday Oct 11 ’11 at 0:41
   
Added output above. –  Paul Schreiber Oct 11 ’11 at 1:00
   
You haven’t mentioned how you got your data copied onto the new disks. However, it can influence the answer heavily. –  poige Oct 11 ’11 at 1:16
   
I got the data copied onto the new disks by partitioning them with sfdisk, using mdadm --add and letting the data sync over. –  Paul Schreiber Oct 11 ’11 at 1:29
   
@Paul Schreiber, sfdisk, you say? Do you mean that you copied the disk partitioning schema as well? –  poige Oct 11 ’11 at 4:59

 

 


I regularly use mdadm and consider it one of the most dangerous Linux utilities. However, if you employ the correct safety precautions, you can avoid most cases of potential data loss. Back up all your data! I have been bitten twice by mdadm in the past, losing over 700 GB of data, very little of which could be recovered. You have been warned.

There is a very good chance you will need to create the RAID array again, as mdadm does not expect or compensate for drives suddenly increasing in size. It will use the size stated in the RAID superblock, not the size of the drive itself. Provided that the drives are already synced, you shouldn’t have many problems.

Remember, if you want to boot off it, use superblock version 0.90.

Edit

This is how I would do it (untested!).

Create a RAID1 with a missing drive, just so we can quickly test that the data remains intact while still having another drive with a copy of the data. Your old metadata was 0.90, so we will keep the same version here.

mdadm --create /dev/md2 --level=mirror --metadata=0.90 --raid-devices=2 missing /dev/sdb3 

Mount it to test that everything works:

mkdir /mnt/test
mount /dev/md2 /mnt/test

Check your data:

   ls -l /mnt/test 

If it all looks OK, then unmount the drive and resize.

umount /mnt/test
resize2fs /dev/md2

Once that is all OK, you can add the other drive to the array.

mdadm --add /dev/md2 /dev/sda3

Then wait for the drives to resync:

cat /proc/mdstat

 
   
Do you have any specific suggestions? i.e. steps for me to take? –  Paul Schreiber Oct 10 ’11 at 23:48
   
look at my edit above –  Silverfire Oct 11 ’11 at 0:14
   
/dev/md2 already exists. Why do I want to re-create it? And: I’d have to boot off a rescue disc to make this happen? Is there any way I can resize this live? –  Paul Schreiber Oct 11 ’11 at 0:50
   
Live, maybe not, but you may be able to do it with a restart. If you remove the primary boot drive from the array, you can then create a new array on it (name it /dev/md3 or something) with a missing drive. The system will then boot off the new drive/RAID array, and then you can add the old one. –  Silverfire Oct 11 ’11 at 3:54
   
What would probably be easiest though is to just restore from backup on a new array. –  Silverfire Oct 11 ’11 at 3:57

 


Just use

mdadm --grow --size max /dev/md2 

Then you’ll be able to use

resize2fs 

to let the filesystem match the RAID size. All of this is done online, without even unmounting md2.
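As a concrete sketch, the whole online sequence would look like this (assuming the kernel already sees the full-size partitions):

```shell
# Grow the md device to the largest size the component partitions allow.
mdadm --grow --size max /dev/md2

# Grow the ext3 filesystem to fill the device; resize2fs can grow an
# ext3 filesystem online (while mounted). With 4 KiB blocks, the current
# 38061984-block filesystem is exactly the 152247936 KiB the array reports.
resize2fs /dev/md2
```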

  

 


From looking at /proc/partitions, it’s apparent that Linux thinks sda3 and sdb3 are smaller than they really are.

Sum the sizes of the partitions

8       17      96358 sdb1
8       18    3903795 sdb2
8       19  152248005 sdb3

and you’ll get a number much lower than the size of the disk.

8       16  488386584 sdb 

152248005 blocks of 1024 bytes is consistent with the size mdadm --grow and resize2fs are reporting for md2.
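To make the mismatch concrete, here is the arithmetic as a quick shell check (numbers copied from /proc/partitions above):

```shell
# Sum of the partition sizes the kernel currently believes (1 KiB blocks):
total=$((96358 + 3903795 + 152248005))
echo "$total"                    # 156248158 KiB, roughly 149 GiB

# Space on the 500 GB disk the kernel cannot see yet:
echo "$((488386584 - total))"    # 332138426 KiB, about 317 GiB
```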

Did you initially create these partitions with a smaller size, then later recreate them to utilize the rest of the disks? If so, rebooting should allow the kernel to re-read the partition table. After that, growing the RAID device and resizing the filesystem should work.
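If a reboot is inconvenient, it may be possible to make the kernel re-read the partition tables in place, roughly like this (a sketch; blockdev will refuse if any partition on the disk is in use, which is likely for a boot disk, in which case a reboot is the fallback):

```shell
# Ask the kernel to re-read each disk's partition table.
blockdev --rereadpt /dev/sda
blockdev --rereadpt /dev/sdb

# /proc/partitions should then show ~484383847 blocks for sda3/sdb3
# (968767695 sectors / 2); after that, grow the array and filesystem.
mdadm --grow /dev/md2 --size=max
resize2fs /dev/md2
```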

 
