Ubuntu 16.04 – frozen mdadm array

I have a working RAID5 array consisting of six 4TB disks.
Smartd reported that one of the disks had started to fail.
I decided to do several things in one operation:
1) remove the failing disk
2) add a new disk to replace it
3) add a few more disks to the array and grow it

Since the only spare disks I had were smaller (3TB), I used LVM to join pairs of the smaller disks into volumes larger than 4TB.

Here is the sequence of commands I ran:

1) vgcreate vg_sdi_sdj /dev/sdi1 /dev/sdj1
2) vgcreate vg_sdk_sdl /dev/sdk1 /dev/sdl1
3) lvcreate -l 100%FREE -n all vg_sdi_sdj
4) lvcreate -l 100%FREE -n all vg_sdk_sdl
5) mdadm --manage /dev/md1 --add /dev/sdg1
6) mdadm --manage /dev/md1 --add /dev/vg_sdi_sdj/all
7) mdadm --manage /dev/md1 --add /dev/vg_sdk_sdl/all
8) mdadm --manage /dev/md1 --fail /dev/sdc1
9) mdadm --grow --raid-devices=8 --backup-file=/home/andrei/grow_md1.bak /dev/md1
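
As a quick sanity check (not part of the original sequence), the standard LVM reporting tools would confirm that the VGs and LVs from steps 1-4 came out as intended before adding them as members in steps 6 and 7:

pvs -o pv_name,vg_name,pv_size    # confirm each PV landed in the intended VG
lvs -o lv_name,vg_name,lv_size    # confirm each "all" LV spans its whole VG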

At first everything went well. The array started rebuilding. The only odd thing was that no backup file was created. I had been running

watch -n 1 mdadm --detail /dev/md1
nmon

in the background to keep an eye on things. While the rebuild was in progress, I could access the array.
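
The same information is available without polling mdadm every second: /proc/mdstat and md's sysfs nodes are standard on any md-capable kernel. (As an aside, for a grow that adds devices, mdadm only needs the backup file during a brief initial critical section, which may be why it never appeared to exist.)

cat /proc/mdstat                      # overall state and reshape percentage
cat /sys/block/md1/md/sync_action     # reads "reshape" while a grow is running
cat /sys/block/md1/md/sync_completed  # sectors done / total sectors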

However, 9% into the process, all I/O on the array stopped, except for /dev/sdb and /dev/sdb1 being read at 100%. As soon as I killed the watch -n 1 mdadm, that stopped as well.
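
A hedged diagnostic sketch for a wedge like this (iostat comes from the sysstat package): tasks stuck in uninterruptible sleep confirm a hung kernel I/O path, and per-device statistics confirm where the reads are going.

ps -eo pid,stat,wchan:30,cmd | awk '$2 ~ /^D/'   # tasks in uninterruptible (D) sleep
iostat -x 1                                      # per-device utilisation; here only sdb would show activity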

Here is the recent output of mdadm --detail:

[mdadm --detail output not preserved]

I could not do any I/O on the array. Running htop showed one CPU core at 100%, busy with I/O operations.

I rebooted the machine. The array did not reassemble. I reassembled it manually by running:

mdadm --assemble /dev/md1 --force /dev/sdb1 /dev/sdd1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/vg_sdi_sdj/all /dev/vg_sdk_sdl/all

(the disks changed names after the reboot). Nevertheless, LVM properly found the volumes and groups and brought them up.

It would not assemble without --force. With --force it did assemble and showed the --detail report quoted above.
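
Since the names shuffle between boots, assembling by UUID avoids listing members by hand; a sketch using standard mdadm and blkid invocations, where <array-uuid> is a placeholder:

mdadm --examine --scan                  # prints ARRAY lines including the array UUID
blkid | grep linux_raid_member          # maps current /dev names to that UUID
mdadm --assemble /dev/md1 --uuid=<array-uuid> --force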

However, it still would not allow any I/O, so the mount command froze (I have single-disk LVM and an ext4 filesystem on top of the array). htop also showed one CPU core pegged on I/O.

Yet no disk activity lights were on.

At the moment I am stuck with a non-functional array holding a large amount of data. Ideally I would like to retrieve that data.

Perhaps using LVM logical volumes as mdadm "disks" was a mistake, although I could not find any information suggesting it would not work.

I would really appreciate any advice and pointers on how to recover my array.

A closer look at journalctl -xe revealed the following:

Jan 15 22:41:15 server sudo[1612]:   andrei : TTY=tty1 ; PWD=/home/andrei ; USER=root ; COMMAND=/sbin/mdadm --assemble /dev/md1 --force /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/vg_sdi_sdj/all /dev/vg_sdk_sdl/all
Jan 15 22:41:15 server sudo[1612]: pam_unix(sudo:session): session opened for user root by andrei(uid=0)
Jan 15 22:41:15 server kernel: md: md1 stopped.
Jan 15 22:41:15 server kernel: md: bind<dm-1>
Jan 15 22:41:15 server kernel: md: bind<sdd1>
Jan 15 22:41:15 server kernel: md: bind<sdg1>
Jan 15 22:41:15 server kernel: md: bind<sdh1>
Jan 15 22:41:15 server kernel: md: bind<sdf1>
Jan 15 22:41:15 server kernel: md: bind<dm-0>
Jan 15 22:41:15 server kernel: md: bind<sde1>
Jan 15 22:41:15 server kernel: md: bind<sdb1>
Jan 15 22:41:15 server mdadm[879]: NewArray event detected on md device /dev/md1
Jan 15 22:41:15 server mdadm[879]: DegradedArray event detected on md device /dev/md1
Jan 15 22:41:15 server kernel: md/raid:md1: reshape will continue
Jan 15 22:41:15 server kernel: md/raid:md1: device sdb1 operational as raid disk 0
Jan 15 22:41:15 server kernel: md/raid:md1: device sde1 operational as raid disk 7
Jan 15 22:41:15 server kernel: md/raid:md1: device dm-0 operational as raid disk 6
Jan 15 22:41:15 server kernel: md/raid:md1: device sdf1 operational as raid disk 5
Jan 15 22:41:15 server kernel: md/raid:md1: device sdh1 operational as raid disk 4
Jan 15 22:41:15 server kernel: md/raid:md1: device sdg1 operational as raid disk 3
Jan 15 22:41:15 server kernel: md/raid:md1: device sdd1 operational as raid disk 2
Jan 15 22:41:15 server kernel: md/raid:md1: allocated 8606kB
Jan 15 22:41:15 server kernel: md/raid:md1: raid level 5 active with 7 out of 8 devices, algorithm 2
Jan 15 22:41:15 server kernel: RAID conf printout:
Jan 15 22:41:15 server kernel:  --- level:5 rd:8 wd:7
Jan 15 22:41:15 server kernel:  disk 0, o:1, dev:sdb1
Jan 15 22:41:15 server kernel:  disk 1, dev:dm-1
Jan 15 22:41:15 server kernel:  disk 2, dev:sdd1
Jan 15 22:41:15 server kernel:  disk 3, dev:sdg1
Jan 15 22:41:15 server kernel:  disk 4, dev:sdh1
Jan 15 22:41:15 server kernel:  disk 5, dev:sdf1
Jan 15 22:41:15 server kernel:  disk 6, dev:dm-0
Jan 15 22:41:15 server kernel:  disk 7, dev:sde1
Jan 15 22:41:15 server kernel: created bitmap (30 pages) for device md1
Jan 15 22:41:15 server kernel: md1: bitmap initialized from disk: read 2 pages, set 7 of 59615 bits
Jan 15 22:41:16 server kernel: md1: detected capacity change from 0 to 20003257057280
Jan 15 22:41:16 server kernel: md: reshape of RAID array md1
Jan 15 22:41:16 server kernel: md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jan 15 22:41:16 server kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
Jan 15 22:41:16 server kernel: md: using 128k window, over a total of 3906886144k.
Jan 15 22:41:16 server mdadm[879]: RebuildStarted event detected on md device /dev/md1
Jan 15 22:41:16 server sudo[1612]: pam_unix(sudo:session): session closed for user root
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589312 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589320 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589328 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589336 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589344 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589352 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589360 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589368 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589376 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759582288 on sdf1)
...
Jan 15 22:43:36 server kernel: INFO: task md1_reshape:1637 blocked for more than 120 seconds.
Jan 15 22:43:36 server kernel:       Not tainted 4.4.0-59-generic #80-Ubuntu
Jan 15 22:43:36 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 22:43:36 server kernel: md1_reshape     D ffff88021028bb68     0  1637      2 0x00000000
Jan 15 22:43:36 server kernel:  ffff88021028bb68 ffff88021028bb80 ffffffff81e11500 ffff88020f5e8e00
Jan 15 22:43:36 server kernel:  ffff88021028c000 ffff8800c6993288 ffff88021028bbe8 ffff88021028bd14
Jan 15 22:43:36 server kernel:  ffff8800c6993000 ffff88021028bb80 ffffffff818343f5 ffff8802144c7000
Jan 15 22:43:36 server kernel: Call Trace:
Jan 15 22:43:36 server kernel:  [<ffffffff818343f5>] schedule+0x35/0x80
Jan 15 22:43:36 server kernel:  [<ffffffffc01d2fec>] reshape_request+0x7fc/0x950 [raid456]
Jan 15 22:43:36 server kernel:  [<ffffffff810c4240>] ? wake_atomic_t_function+0x60/0x60
Jan 15 22:43:36 server kernel:  [<ffffffffc01d346b>] sync_request+0x32b/0x3b0 [raid456]
Jan 15 22:43:36 server kernel:  [<ffffffff81833d46>] ? __schedule+0x3b6/0xa30
Jan 15 22:43:36 server kernel:  [<ffffffff8140c305>] ? find_next_bit+0x15/0x20
Jan 15 22:43:36 server kernel:  [<ffffffff81704c5c>] ? is_mddev_idle+0x9c/0xfa
Jan 15 22:43:36 server kernel:  [<ffffffff816a20fc>] md_do_sync+0x89c/0xe60
Jan 15 22:43:36 server kernel:  [<ffffffff810c4240>] ? wake_atomic_t_function+0x60/0x60
Jan 15 22:43:36 server kernel:  [<ffffffff8169e689>] md_thread+0x139/0x150
Jan 15 22:43:36 server kernel:  [<ffffffff810c4240>] ? wake_atomic_t_function+0x60/0x60
Jan 15 22:43:36 server kernel:  [<ffffffff8169e550>] ? find_pers+0x70/0x70
Jan 15 22:43:36 server kernel:  [<ffffffff810a0c08>] kthread+0xd8/0xf0
Jan 15 22:43:36 server kernel:  [<ffffffff810a0b30>] ? kthread_create_on_node+0x1e0/0x1e0
Jan 15 22:43:36 server kernel:  [<ffffffff8183888f>] ret_from_fork+0x3f/0x70
Jan 15 22:43:36 server kernel:  [<ffffffff810a0b30>] ? kthread_create_on_node+0x1e0/0x1e0

Using LVM here was indeed a mistake. Not only does it create a needlessly complex storage stack that no one but its creator can make sense of, MD arrays are assembled before LVM arrays, which requires you to manually invoke an MD scan on LVs acting as MD members.
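
A minimal sketch of that manual ordering in a rescue shell (standard LVM and mdadm commands, assuming the LVs carry valid MD superblocks):

vgchange -ay               # activate the VGs first so the member LVs appear
mdadm --assemble --scan    # only then can an MD scan find the LV-backed members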

Additionally, avoid using kernel device names (e.g. sda, sdb, and so on) in persistent configuration. This is especially relevant when naming volume groups, as the VG abstracts the underlying storage and can be moved freely between PVs. Kernel device names are not considered persistent either, and can change at any time for a variety of reasons. This is not a problem for LVM PVs (they are part of a wholesale disk scan and will be picked up from just about anywhere), but in the situation you created, your VG names will quickly fail to reflect reality.
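
For illustration, the persistent alternatives (the vgcreate line is hypothetical, showing the by-id style rather than a real device):

ls -l /dev/disk/by-id/              # stable symlinks keyed to model and serial
pvs -o pv_name,pv_uuid,vg_name      # LVM itself tracks PVs by UUID, not name
vgcreate vg_data /dev/disk/by-id/ata-MODEL_SERIAL-part1   # hypothetical device path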

I would recommend that you attempt a graceful removal of the LVs from the MD array and get it back to a degraded (but sane) state. Note that MD on top of LVM is not a combination anyone pays attention to when squashing bugs. You are in uncharted territory, and things you expect to work may fail for no apparent reason.
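
The first step of that is assembling without letting the reshape continue. mdadm has an assemble-time flag for this, although how it behaves mid-reshape depends on the mdadm version, so treat this as a sketch (member list taken from the journal above):

mdadm --assemble /dev/md1 --force --freeze-reshape /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/vg_sdi_sdj/all /dev/vg_sdk_sdl/all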

If this data is critical and not backed up, you will want to defer to someone on site who knows LVM and MD really, really well. I assume you do not have such a person, since you are asking here, so let's have a conversation if you need one. I will update this with any interesting details if you have to go that route. For now, just try to back out of this mess by replacing the LVM members with plain old disks.
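
If you do get the array healthy enough to work on, replacing the LV members one at a time keeps RAID5 redundancy intact; a hedged sketch where /dev/sdX1 stands in for whatever plain disk is available (hypothetical name). Do not fail a member while the array is still degraded, as that would lose data.

mdadm --manage /dev/md1 --fail /dev/vg_sdi_sdj/all      # drop one LV member only
mdadm --manage /dev/md1 --remove /dev/vg_sdi_sdj/all
mdadm --manage /dev/md1 --add /dev/sdX1                 # hypothetical plain-disk replacement
# wait for the rebuild in /proc/mdstat to complete, then repeat for /dev/vg_sdk_sdl/all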
