
    This article explains how to rebuild a software RAID after replacing a defective hard disk.

    Attention

    These instructions are only valid for Dedicated Servers that use BIOS as the interface between the hardware and the operating system. If you are using a Dedicated Server that uses UEFI as the interface between the hardware and the operating system, see the following article for information about rebuilding software RAID:

    Rebuilding Software RAID (Linux/Dedicated Server with UEFI)

    Checking Whether a Dedicated Server Uses UEFI or BIOS

    To check whether your server uses BIOS or UEFI as the interface between the hardware and the operating system, issue the following command:

    [root@localhost ~]# [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS

    Important Information About Partitioning Your Dedicated Server

    On Dedicated Servers managed in the Cloud Panel, only one partition has been created during setup and during operating system reinstallation since 10/20/2021. On Dedicated Servers set up before this date, and on Dedicated Servers acquired as part of a Server Power Deal, the operating system images use the Logical Volume Manager (LVM). LVM places a logical layer between the file system and the partitions of the disks in use. This makes it possible to create a file system that spans multiple partitions and/or disks, so that, for example, the storage space of several partitions or disks can be combined.
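
    If your server uses LVM, you can, for example, display the physical volumes, volume groups and logical volumes with the standard LVM tools (a sketch; the volume group name vg00 shown later in this article is only an example and may differ on your server):

    [root@localhost ~]# pvs
    [root@localhost ~]# vgs
    [root@localhost ~]# lvs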

    Determining the Information Needed to Rebuild the Software RAID

    List existing hard disks and partitions

    To list the existing disks and partitions, do the following:

    • Log in to the server with your root account.

    • To list the existing disks and partitions, enter the command fdisk -l. fdisk is a command line program for partitioning disks. With this program, you can view, create, or delete partitions.

      [root@localhost ~]# fdisk -l

    • Note the existing disks, partitions and the paths of the swap files.
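
    To find the paths of the active swap devices, you can, for example, use one of the following commands (a sketch; the output depends on your system):

    [root@localhost ~]# swapon --show
    [root@localhost ~]# cat /proc/swaps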

    Please Note

    After the hard disk has been replaced, it may be recognized as sdc. This always happens when the hard disk is replaced via hot swap. In this case, only a reboot ensures that the hard disk is recognized as sda or sdb again.
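
    To check which device name the replaced hard disk was assigned, you can, for example, list the disks without their partitions (a sketch; names, sizes and models depend on your hardware):

    [root@localhost ~]# lsblk -d -o NAME,SIZE,MODEL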

    Determining Mountpoints
    • To display the mount points of the devices and partitions you are using, enter the following command:

      [root@localhost ~]# lsblk

      Output similar to the following example is then displayed:

      root@s20776641:~# lsblk
      NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
      loop1             7:1    0  54.9M  1 loop  /snap/lxd/12631
      loop2             7:2    0 110.6M  1 loop  /snap/core/12834
      loop3             7:3    0  61.9M  1 loop  /snap/core20/1434
      loop4             7:4    0  80.4M  1 loop  /snap/lxd/23037
      sda               8:0    0 931.5G  0 disk
      ├─sda1            8:1    0     4G  0 part
      │ └─md1           9:1    0     4G  0 raid1 /
      ├─sda2            8:2    0     2G  0 part  [SWAP]
      └─sda3            8:3    0 925.5G  0 part
        └─md3           9:3    0 925.5G  0 raid1
          ├─vg00-usr  253:0    0     5G  0 lvm   /usr
          ├─vg00-var  253:1    0     5G  0 lvm   /var
          └─vg00-home 253:2    0     5G  0 lvm   /home
      sdb               8:16   0 931.5G  0 disk
      ├─sdb1            8:17   0     4G  0 part
      │ └─md1           9:1    0     4G  0 raid1 /
      ├─sdb2            8:18   0     2G  0 part  [SWAP]
      └─sdb3            8:19   0 925.5G  0 part
        └─md3           9:3    0 925.5G  0 raid1
          ├─vg00-usr  253:0    0     5G  0 lvm   /usr
          ├─vg00-var  253:1    0     5G  0 lvm   /var
          └─vg00-home 253:2    0     5G  0 lvm   /home

    • Note the devices and partitions and their mount points.
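
    To check which partitions belong to which array and what state the individual members are in, you can also, for example, query the arrays directly with mdadm (a sketch; the array names /dev/md1 and /dev/md3 match the example configuration used in this article):

    [root@localhost ~]# mdadm --detail /dev/md1
    [root@localhost ~]# mdadm --detail /dev/md3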

    Example Scenario

    This tutorial assumes the following configuration:

    root@s20776641:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md3 : active raid1 sdb3[1] sda3[0]
          970470016 blocks [2/2] [UU]
     
    md1 : active raid1 sdb1[1] sda1[0]
          4194240 blocks [2/2] [UU]

    There are two arrays:

    /dev/md1, mounted as /

    /dev/md3, which contains the LVM volume group vg00

    The logical volumes in vg00 are mounted as /usr, /var and /home.

    In addition, there are two swap partitions (sda2 and sdb2) that are not part of the RAID.
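
    For orientation: in the /proc/mdstat output, a healthy RAID 1 array is shown as [2/2] [UU]. If one member is missing, the array appears as [2/1] with an underscore in place of the missing disk. Purely as an illustration (not output from the server above), a degraded md1 whose sda1 has failed would look roughly like this:

    md1 : active raid1 sdb1[1]
          4194240 blocks [2/1] [_U]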

    Restoring the RAID

    The rest of the procedure depends on whether hard disk 1 (sda) or hard disk 2 (sdb) was replaced:

    Hard disk 1 (sda) was replaced

    If hard disk 1 (sda) was replaced, you must check whether it was recognized correctly. You may need to perform a reboot. Then boot the server into the rescue system and perform the steps listed below.

    • First copy the partition tables to the new (empty) hard disk:

      [root@host ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda

      (If necessary, you have to use the --force option.)

    • Add the partitions to the RAID:

      [root@host ~]# mdadm /dev/md1 -a /dev/sda1

      [root@host ~]# mdadm /dev/md3 -a /dev/sda3

      You can now follow the rebuild of the RAID with cat /proc/mdstat (a sketch for monitoring the rebuild automatically follows after this list).

    • Then mount the partitions var, usr and home:

      [root@host ~]# mount /dev/md1 /mnt
      [root@host ~]# mount /dev/mapper/vg00-var /mnt/var
      [root@host ~]# mount /dev/mapper/vg00-usr /mnt/usr
      [root@host ~]# mount /dev/mapper/vg00-home /mnt/home

    • To install GRUB later without errors, mount proc, sys and dev:

      [root@host ~]# mount -o bind /proc /mnt/proc
      [root@host ~]# mount -o bind /sys /mnt/sys
      [root@host ~]# mount -o bind /dev /mnt/dev

    • After mounting the partitions, change into the chroot environment and install the GRUB bootloader:

      [root@host ~]# chroot /mnt
      [root@host ~]# grub-install /dev/sda

    • Exit the chroot environment with exit and unmount all disks again:

      [root@host ~]# umount -a

      Wait until the rebuild process is complete and then boot the server back into the normal system.

    • Finally, you must now enable the swap partition using the following commands:

      [root@host ~]# mkswap /dev/sda2
      [root@host ~]# swapon -a
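
    To follow the rebuild mentioned in the steps above without re-running the command manually, you can, for example, refresh /proc/mdstat automatically (a sketch; the 5-second interval is an arbitrary choice):

    [root@host ~]# watch -n 5 cat /proc/mdstat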

    Hard disk 2 (sdb) was replaced

    If hard disk 2 (sdb) has been replaced, proceed as follows:

    • Perform a reboot so that hard disk 2 (sdb) is recognized again.

    • In the local system, copy the partition tables to the new (empty) hard disk:

      [root@host ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb

      (If necessary, you must use the --force option)

    • Add the partitions to the RAID:

      [root@host ~]# mdadm /dev/md1 -a /dev/sdb1

      [root@host ~]# mdadm /dev/md3 -a /dev/sdb3

      You can now follow the rebuild of the RAID with cat /proc/mdstat.

    • Finally, you must now enable the swap partition using the following commands:

      [root@host ~]# mkswap /dev/sdb2
      [root@host ~]# swapon -a
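
    Once the rebuild has finished, you can, for example, verify that both arrays are again reported as [UU] and that the swap space is active (a sketch):

    [root@host ~]# cat /proc/mdstat
    [root@host ~]# swapon --show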