How to completely remove "removed" disks from an mdadm array

Question

I have a dedicated server with a hosting provider, and recently my node exporter detected high disk I/O saturation on my RAID 1 array /dev/md3. I checked smartctl for my hard drives, and both drives in the array show a large number of read errors:

[root@ovh-ds03 ~]# smartctl /dev/sda -a | grep Err
Error logging capability:        (0x01) Error logging supported.
     SCT Error Recovery Control supported.
  1 Raw_Read_Error_Rate     0x000b   099   099   016    Pre-fail  Always       -       65538
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0



[root@ovh-ds03 ~]# smartctl /dev/sdb -a | grep Err
Error logging capability:        (0x01) Error logging supported.
     SCT Error Recovery Control supported.
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       65536
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0
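
(For anyone wanting to confirm the failures beyond these attribute counters before filing a ticket, a long SMART self-test is the usual next step; these are standard smartctl invocations:)

[root@ovh-ds03 ~]# smartctl -t long /dev/sda      # kick off a long self-test in the background
[root@ovh-ds03 ~]# smartctl -l selftest /dev/sda  # review the result once the test finishes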

I raised a support ticket to have the 2 disks replaced, but instead of replacing them, the provider added more disks and rebuilt the array onto the 2 new ones. Everything is fine, except the array is now in a degraded state and an alert named NodeRAIDDegraded is firing.
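
(That alert typically comes from node_exporter's md collector metrics; the exact rule depends on your alerting setup, but a common node-mixin-style expression, shown here as an assumption rather than a quote of my rules, is:)

node_md_disks_required - ignoring (state) (node_md_disks{state="active"}) > 0

And checking the server confirms that, yes, it is degraded: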

[root@ovh-ds03 ~]# mdadm --detail /dev/md3
/dev/md3:
           Version : 1.2
     Creation Time : Sat Mar 30 18:18:26 2024
        Raid Level : raid1
        Array Size : 1951283200 (1860.89 GiB 1998.11 GB)
     Used Dev Size : 1951283200 (1860.89 GiB 1998.11 GB)
      Raid Devices : 4
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Sep 14 19:30:44 2024
             State : active, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : md3
              UUID : 939ad077:07c22e9e:ae62fbf9:4df58cf9
            Events : 55337

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
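
(A quicker way to see the same problem is /proc/mdstat: the [4/2] counter and the two empty slots in [__UU] correspond to the removed members. The snippet below is an illustrative sketch reconstructed from the --detail output above, not captured output:)

[root@ovh-ds03 ~]# cat /proc/mdstat
md3 : active raid1 sdd3[3] sdc3[2]
      1951283200 blocks super 1.2 [4/2] [__UU]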

How can I fix this?

I have tried various solutions, such as reassembling the array from scratch:

mdadm --assemble --scan

1 Answer

To fix it and get rid of the removed slots, you need to reduce the number of devices in the RAID array. In your case it was grown from 2 to 4, so now you need to shrink it back from 4 to 2, which can be done like this:

mdadm --grow --raid-devices=2 /dev/md3
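
(If mdadm refuses to shrink because stale members are still listed in the array, clear them out first; failed and detached are standard mdadm keywords that match all faulty members and all members whose device nodes have disappeared, respectively:)

mdadm /dev/md3 --remove failed
mdadm /dev/md3 --remove detached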

After the fix, the RAID array will look like this:

[root@ovh-ds03 ~]# mdadm --detail /dev/md3
/dev/md3:
           Version : 1.2
     Creation Time : Sat Mar 30 18:18:26 2024
        Raid Level : raid1
        Array Size : 1951283200 (1860.89 GiB 1998.11 GB)
     Used Dev Size : 1951283200 (1860.89 GiB 1998.11 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Sep 14 19:33:15 2024
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : md3
              UUID : 939ad077:07c22e9e:ae62fbf9:4df58cf9
            Events : 55484

    Number   Major   Minor   RaidDevice State
       2       8       35        0      active sync   /dev/sdc3
       3       8       51        1      active sync   /dev/sdd3
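
(A worthwhile follow-up, assuming your system reads an mdadm config at boot: refresh the stored array definition and the initramfs so early boot expects the new 2-device layout. Paths and tools vary by distro; on a RHEL-family box like this one it would look roughly like this, with /etc/mdadm.conf swapped for /etc/mdadm/mdadm.conf on Debian:)

mdadm --detail --scan       # print the current ARRAY line for md3
vi /etc/mdadm.conf          # update the stored definition to match
dracut -f                   # rebuild the initramfs (update-initramfs -u on Debian/Ubuntu)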