Rebuilding softraid
As time goes on, one of the hard disks in your RAID array may fail. When this happens, bioctl shows the array as degraded, and it must be rebuilt immediately to avoid the risk of data loss.
In this example, a RAID 1 array was set up from two identically sized 20GB disks, sd0 and sd1, creating the array device sd2. Hard disk sd1 failed and was replaced, so the RAID array must now be rebuilt:
# bioctl sd2
Volume      Status               Size Device
softraid0 0 Degraded      21474533376 sd2     RAID1
          0 Online        21474533376 0:0.0   noencl <sd0a>
          1 Offline                 0 0:1.0   noencl <>
WARNING: Be very careful to double check all commands before typing them. Pay special attention to the device names -- typing the wrong device could delete all your data!
WARNING: You may want to back up all your data immediately, to avoid any risk of data loss.
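A degraded array is easy to miss if nobody runs bioctl by hand. As a minimal sketch of an automated check that could be run from cron, the script below warns whenever the volume status is anything other than Online. The awk field position is an assumption based on the output format shown here, and for illustration it parses a captured copy of the degraded output above instead of calling bioctl directly:

```shell
#!/bin/sh
# Sketch: warn when a softraid volume is not Online.
# In real use, replace the here-document with: status=$(bioctl sd2)
status=$(cat <<'EOF'
Volume      Status               Size Device
softraid0 0 Degraded      21474533376 sd2     RAID1
          0 Online        21474533376 0:0.0   noencl <sd0a>
          1 Offline                 0 0:1.0   noencl <>
EOF
)

# The volume status is the third field of the "softraid0" line.
vol_status=$(printf '%s\n' "$status" | awk '/^softraid0/ { print $3 }')

if [ "$vol_status" != "Online" ]; then
    echo "WARNING: softraid volume status is $vol_status"
fi
```

Run daily from root's crontab, this prints nothing while the array is healthy, so cron only sends mail when something is wrong.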
First, we replace the failed hard disk sd1, then we recreate the fdisk partitions:
# fdisk -iy sd1
Then, we recreate the disklabel layout based on the working disk sd0:
# disklabel sd0 > layout
# disklabel -R sd1 layout
# rm layout
Assuming the RAID partition is on sd1a, we rebuild the mirror with this command:
# bioctl -R /dev/sd1a sd2
softraid0: rebuild of sd2 started on sd1a
We can check on the progress of the rebuild:
# bioctl sd2
Volume      Status               Size Device
softraid0 0 Rebuild       21474533376 sd2     RAID1 3% done
          0 Online        21474533376 0:0.0   noencl <sd0a>
          1 Rebuild       21474533376 0:1.0   noencl <sd1a>
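If you only want the completion percentage, for a status mail or a quick watch loop, it can be pulled out of that output with awk. A sketch, again parsing a captured copy of the output above rather than calling bioctl directly (matching a field ending in "%" is an assumption based on this output format):

```shell
#!/bin/sh
# Sketch: extract the rebuild percentage from bioctl-style output.
# In real use: out=$(bioctl sd2)
out=$(cat <<'EOF'
Volume      Status               Size Device
softraid0 0 Rebuild       21474533376 sd2     RAID1 3% done
          0 Online        21474533376 0:0.0   noencl <sd0a>
          1 Rebuild       21474533376 0:1.0   noencl <sd1a>
EOF
)

# On the volume line, find the field that ends in a percent sign.
progress=$(printf '%s\n' "$out" |
    awk '/^softraid0/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }')

echo "rebuild progress: ${progress:-unknown}"
```

With the sample input above this prints "rebuild progress: 3%"; once the rebuild finishes, the percent field disappears and the script reports "unknown".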
Once done, bioctl should show this output:
# bioctl sd2
Volume      Status               Size Device
softraid0 0 Online        21474533376 sd2     RAID1
          0 Online        21474533376 0:0.0   noencl <sd0a>
          1 Online        21474533376 0:1.0   noencl <sd1a>