Software RAID not working after a power failure



Yesterday my server lost power. It boots back up automatically when power is restored, but the RAID no longer works.

The array is a software RAID6 built with mdadm. It originally used 8 drives. A few months ago I noticed a drive failing its SMART tests and replaced it. While replacing that drive, I also added 4 drives that had originally been in an older server and were formatted as ZFS (part of a FreeNAS RAIDZ), bringing the total drive count to 12.

Unfortunately I can't remember the exact command I used to grow the array, and the command is no longer in my bash_history. Today mdadm is reporting 8 disks, and fdisk is reporting those 4 drives as ZFS. I'm confident I've been running the 12-disk RAID setup for the last several months.

As far as I know, the server had not been shut down or rebooted between the time I replaced the bad drive and added the 4 extra drives, and yesterday.

The only mdadm-related commands I've run since the power loss are mdadm --detail /dev/md0, mdadm --stop /dev/md0, and mdadm --assemble --scan -v. I have not rebooted the server.

I know I need backups, and I'll implement them as soon as possible. For now I'm hoping there's a way to recover the data.

I'm running Debian 9 and mdadm 3.4.

I'm happy to provide any other information. (I don't consider myself a RAID or mdadm expert.)

Update: I've added the mdadm --examine output for a "good" and a "bad" drive. The "good" drives correctly report raid6 and 12 drives.

mdadm --detail /dev/md0

(Note: this was captured before running mdadm --stop /dev/md0. Note that it also reports raid0.)

/dev/md0:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 8
    Persistence : Superblock is persistent

          State : inactive

           Name : blackcanary:0  (local to host blackcanary)
           UUID : 7bd22c29:465d927f:d0c1d08f:ba3694c9
         Events : 72036

    Number   Major   Minor   RaidDevice

       -       8       64        -        /dev/sde
       -       8       32        -        /dev/sdc
       -       8        0        -        /dev/sda
       -       8      112        -        /dev/sdh
       -       8       80        -        /dev/sdf
       -       8       48        -        /dev/sdd
       -       8      128        -        /dev/sdi
       -       8       96        -        /dev/sdg

fdisk -l

(Note: /dev/sdb and /dev/sdj are two drives that should not be in the array.)

Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdg: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdh: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdi: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb: 57.9 GiB, 62109253632 bytes, 121307136 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 764C7B0B-574D-40DF-95C0-06D765AEB2D2

Device        Start       End  Sectors  Size Type
/dev/sdb1      2048   1050623  1048576  512M EFI System
/dev/sdb2   1050624  54255615 53204992 25.4G Linux filesystem
/dev/sdb3  54255616 121305087 67049472   32G Linux swap


Disk /dev/sdj: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdk: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 41DFFB5A-7F8C-11E5-9D78-7824AF43D5FA

Device       Start        End    Sectors  Size Type
/dev/sdk1      128    4194431    4194304    2G FreeBSD swap
/dev/sdk2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdl: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 40C2CEAE-7F8C-11E5-9D78-7824AF43D5FA

Device       Start        End    Sectors  Size Type
/dev/sdl1      128    4194431    4194304    2G FreeBSD swap
/dev/sdl2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdm: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 402A1394-7F8C-11E5-9D78-7824AF43D5FA

Device       Start        End    Sectors  Size Type
/dev/sdm1      128    4194431    4194304    2G FreeBSD swap
/dev/sdm2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdn: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 415081DC-7F8C-11E5-9D78-7824AF43D5FA

Device       Start        End    Sectors  Size Type
/dev/sdn1      128    4194431    4194304    2G FreeBSD swap
/dev/sdn2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS

blkid

/dev/sdd: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="91d8e433-aeb9-a4f2-75ca-10cf3ceb0a83" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sde: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="6ccd4cee-9fdd-872c-add0-209a9d074eb5" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sdc: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="11bf8212-7c04-0c8c-a061-168bda9c34a5" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sda: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="6cee76b0-031c-bb6a-6c13-7d15d0b2feee" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sdf: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="508cd116-e0ac-131c-cd0d-3c5c41c1cbba" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sdg: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="0080d32e-7fea-037f-75a8-6e40dfa8fdfa" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sdh: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="f1aa862e-6c4c-0239-5fb4-29d7a9dd7497" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sdi: UUID="7bd22c29-465d-927f-d0c1-d08fba3694c9" UUID_SUB="32472d1f-25c1-dfb3-fa9a-57ab3df17986" LABEL="blackcanary:0" TYPE="linux_raid_member"
/dev/sdb1: UUID="E15C-D029" TYPE="vfat" PARTUUID="dff9fca6-2eef-4084-845c-d780ca7b6cb8"
/dev/sdb2: UUID="bfd3d30c-de34-4e00-89b4-384bcbb7922d" TYPE="ext4" PARTUUID="7696f182-551e-447c-9549-155605cc48a3"
/dev/sdb3: UUID="7f65b504-ffe2-48e4-868f-a9db95865505" TYPE="swap" PARTUUID="48ecdc7c-fb0a-404e-bc02-178063515b6c"
/dev/sdj: UUID="b59275af-0825-4f25-96d2-aff0c3fef5e5" TYPE="ext4"
/dev/sdk1: PARTUUID="41f3ae62-7f8c-11e5-9d78-7824af43d5fa"
/dev/sdk2: LABEL="main" UUID="2939752790805872810" UUID_SUB="11544886583175140825" TYPE="zfs_member" PARTUUID="420ad019-7f8c-11e5-9d78-7824af43d5fa"
/dev/sdl1: PARTUUID="40d655e5-7f8c-11e5-9d78-7824af43d5fa"
/dev/sdl2: LABEL="main" UUID="2939752790805872810" UUID_SUB="15587016790013084755" TYPE="zfs_member" PARTUUID="40ef0474-7f8c-11e5-9d78-7824af43d5fa"
/dev/sdm1: PARTUUID="403ea9c7-7f8c-11e5-9d78-7824af43d5fa"
/dev/sdm2: LABEL="main" UUID="2939752790805872810" UUID_SUB="12390459856885165202" TYPE="zfs_member" PARTUUID="4057cbaa-7f8c-11e5-9d78-7824af43d5fa"
/dev/sdn1: PARTUUID="416507e6-7f8c-11e5-9d78-7824af43d5fa"
/dev/sdn2: LABEL="main" UUID="2939752790805872810" UUID_SUB="16896032374271545514" TYPE="zfs_member" PARTUUID="417ef281-7f8c-11e5-9d78-7824af43d5fa"

mdadm --assemble --scan -v

mdadm: looking for devices for /dev/md0
mdadm: No super block found on /dev/sdn2 (Expected magic a92b4efc, got b1f5ae15)
mdadm: no RAID superblock on /dev/sdn2
mdadm: No super block found on /dev/sdn1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdn1
mdadm: No super block found on /dev/sdn (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdn
mdadm: No super block found on /dev/sdm2 (Expected magic a92b4efc, got fd87a884)
mdadm: no RAID superblock on /dev/sdm2
mdadm: No super block found on /dev/sdm1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdm1
mdadm: No super block found on /dev/sdm (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdm
mdadm: No super block found on /dev/sdl2 (Expected magic a92b4efc, got 946313cc)
mdadm: no RAID superblock on /dev/sdl2
mdadm: No super block found on /dev/sdl1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdl1
mdadm: No super block found on /dev/sdl (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdl
mdadm: No super block found on /dev/sdk2 (Expected magic a92b4efc, got 4de36623)
mdadm: no RAID superblock on /dev/sdk2
mdadm: No super block found on /dev/sdk1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdk1
mdadm: No super block found on /dev/sdk (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdk
mdadm: No super block found on /dev/sdj (Expected magic a92b4efc, got 0000043c)
mdadm: no RAID superblock on /dev/sdj
mdadm: No super block found on /dev/sdb3 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb3
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got 00000405)
mdadm: no RAID superblock on /dev/sdb2
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb1
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdi is identified as a member of /dev/md0, slot 7.
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 6.
mdadm: /dev/sdg is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sda is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/sdd to /dev/md0 as 1
mdadm: added /dev/sda to /dev/md0 as 2
mdadm: added /dev/sde to /dev/md0 as 3
mdadm: added /dev/sdf to /dev/md0 as 4
mdadm: added /dev/sdg to /dev/md0 as 5
mdadm: added /dev/sdh to /dev/md0 as 6
mdadm: added /dev/sdi to /dev/md0 as 7
mdadm: no uptodate device for slot 8 of /dev/md0
mdadm: no uptodate device for slot 9 of /dev/md0
mdadm: no uptodate device for slot 10 of /dev/md0
mdadm: no uptodate device for slot 11 of /dev/md0
mdadm: added /dev/sdc to /dev/md0 as 0
mdadm: /dev/md0 assembled from 8 drives - not enough to start the array.

mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Sun, 25 Jun 2017 22:28:24 -0500 by mkconf
ARRAY /dev/md0 metadata=1.2 name=blackcanary:0 UUID=7bd22c29:465d927f:d0c1d08f:ba3694c9

mdadm --examine /dev/sdd

(Note: the 8 "good" drives all return very similar information.)

/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 7bd22c29:465d927f:d0c1d08f:ba3694c9
           Name : blackcanary:0  (local to host blackcanary)
  Creation Time : Sun Jun 25 22:53:12 2017
     Raid Level : raid6
   Raid Devices : 12

 Avail Dev Size : 7813780144 (3725.90 GiB 4000.66 GB)
     Array Size : 39068897280 (37259.00 GiB 40006.55 GB)
  Used Dev Size : 7813779456 (3725.90 GiB 4000.66 GB)
    Data Offset : 257024 sectors
   Super Offset : 8 sectors
   Unused Space : before=256936 sectors, after=688 sectors
          State : clean
    Device UUID : 91d8e433:aeb9a4f2:75ca10cf:3ceb0a83

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Oct  7 14:23:36 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : b960e905 - correct
         Events : 72036

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)

mdadm --examine /dev/sdk

(Note: the 4 "bad" drives all return identical information.)

/dev/sdk:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)

fdisk -b 512 -t gpt -l /dev/sdn

Disk /dev/sdn: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 415081DC-7F8C-11E5-9D78-7824AF43D5FA

Device       Start        End    Sectors  Size Type
/dev/sdn1      128    4194431    4194304    2G FreeBSD swap
/dev/sdn2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS

fdisk -b 4096 -t gpt -l /dev/sdn

Disk /dev/sdn: 3.7 TiB, 4000787030016 bytes, 976754646 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 415081DC-7F8C-11E5-9D78-7824AF43D5FA

Device       Start        End    Sectors  Size Type
/dev/sdn1      128    4194431    4194304   16G FreeBSD swap
/dev/sdn2  4194432 7814037127 7809842696 29.1T FreeBSD ZFS

Check whether sudo fdisk -b 512 -t gpt -l /dev/sdn or sudo fdisk -b 4096 -t gpt -l /dev/sdn shows the expected partitions.
grawity

I've added the output of both commands to the original post. Part of my confusion is that I wasn't expecting partitions on sd[k-n], given that the other drives don't list any.
Uze

Answers:



After a lot of research I was able to restore the RAID array with no apparent data loss.

I ultimately had to use mdadm --create --assume-clean. I chose to use overlay files so I could test various configurations non-destructively until I found the correct one.
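The overlay technique (documented on the Linux RAID wiki) puts a device-mapper snapshot, backed by a sparse file, in front of each drive, so every test write lands in the file instead of on disk. A dry-run sketch of the setup — it only prints the commands it would run, and the overlay path is an assumption, not taken from the post:

```shell
#!/bin/sh
# Dry-run overlay setup sketch: prints the commands instead of running them.
# Assumptions: drives sdk..sdn, overlays kept under /overlays.
SECTORS=7814037168          # 512-byte sectors per 4 TB drive (from fdisk -l)
for dev in sdk sdl sdm sdn; do
    echo "truncate -s $((SECTORS * 512)) /overlays/$dev.ovr"
    echo "loop=\$(losetup -f --show /overlays/$dev.ovr)"
    echo "dmsetup create $dev --table \"0 $SECTORS snapshot /dev/$dev \$loop P 8\""
done
```

The array is then assembled from the /dev/mapper/* devices; tearing a test down is dmsetup remove plus losetup -d, and the sparse files absorb only the blocks actually written.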

I used mdadm --examine /dev/sd* on the "good" drives to determine their order. I then generated the possible permutations of the "bad" drives and worked through them until I got a mountable filesystem. Fortunately there were only 24 possible combinations.
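Enumerating the 4 unknown positions (slots 8–11) is a plain 4! = 24 search; it can be sketched as nested shell loops. The drive names come from the post, but the loop itself is only an illustration of the search, not the exact script used:

```shell
#!/bin/sh
# Enumerate all 24 orderings of the four former-ZFS drives for slots 8-11.
# Each output line is a candidate tail for the mdadm --create device list.
for a in sdk sdl sdm sdn; do
  for b in sdk sdl sdm sdn; do
    [ "$b" = "$a" ] && continue
    for c in sdk sdl sdm sdn; do
      { [ "$c" = "$a" ] || [ "$c" = "$b" ]; } && continue
      for d in sdk sdl sdm sdn; do
        { [ "$d" = "$a" ] || [ "$d" = "$b" ] || [ "$d" = "$c" ]; } && continue
        echo "/dev/mapper/$a /dev/mapper/$b /dev/mapper/$c /dev/mapper/$d"
      done
    done
  done
done
```

Each line would be appended after the eight known devices in the mdadm --create command, followed by a dumpe2fs/fsck check to see whether a valid filesystem appears.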

I went through all 24 permutations without success. Then I examined one of the mapper devices (mdadm --examine /dev/mapper/sd*), compared it against the original, and noticed that the data offset was different. I added the data-offset parameter to the configuration, re-tested the permutations, and succeeded on the 12th attempt.
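The units matter here: mdadm --examine reports Data Offset in 512-byte sectors, while the --data-offset option to mdadm --create is interpreted in kibibytes unless a suffix is given. The value forced in the create command matches the 257024-sector offset the "good" drives report:

```shell
#!/bin/sh
# Convert the superblock's data offset (in 512-byte sectors) to the
# KiB value that mdadm --create --data-offset expects by default.
sectors=257024                   # Data Offset from mdadm --examine /dev/sdd
kib=$(( sectors * 512 / 1024 ))  # sectors -> bytes -> KiB
echo "$kib"                      # prints 128512
```

This is presumably why a fresh --create with mdadm's current defaults produced a different offset than the original array: the default data offset depends on the mdadm version and array geometry, so it has to be pinned explicitly when recreating.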

These are the commands I ran while testing:

mdadm --stop /dev/md0
mdadm --create --assume-clean --run --level=6 --data-offset=128512 --raid-devices=12 /dev/md0 /dev/mapper/sdb /dev/mapper/sdc /dev/mapper/sda /dev/mapper/sdd /dev/mapper/sde /dev/mapper/sdf /dev/mapper/sdg /dev/mapper/sdh /dev/mapper/sdn /dev/mapper/sdm /dev/mapper/sdl /dev/mapper/sdk
dumpe2fs /dev/md0
fsck.ext4 -v /dev/md0

I used dd to back up the MBR and partition table of the "bad" drives, then overwrote the MBR and partition table with zeros to match the "good" drives, so they shouldn't cause problems on future reboots.
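The backup-then-zero step can be rehearsed safely on a scratch file before touching a real /dev/sdX. On a GPT disk, sector 0 holds the protective MBR and sectors 1–33 hold the primary GPT header and table, so 34 sectors covers both; the paths here are illustrative:

```shell
#!/bin/sh
set -e
disk=$(mktemp)                       # scratch file standing in for /dev/sdX
dd if=/dev/urandom of="$disk" bs=512 count=64 2>/dev/null

# Back up the protective MBR + primary GPT (sectors 0-33)...
dd if="$disk" of="$disk.gpt-backup" bs=512 count=34 2>/dev/null

# ...then zero that region so no stale partition table survives a reboot.
dd if=/dev/zero of="$disk" bs=512 count=34 conv=notrunc 2>/dev/null

# The first 34 sectors (17408 bytes) now read back as zeros.
cmp -s -n 17408 "$disk" /dev/zero && echo "zeroed OK"
```

Note that on a real drive the backup GPT lives in the last 33 sectors as well; sgdisk --zap-all or wipefs -a are the usual tools for clearing both copies in one step.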

After the successful rebuild I generated a new mdadm.conf file and recorded as much data about the array as I could, in case I need it later. Now I'm looking into a real backup solution.
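Persisting the rebuilt array on Debian generally means regenerating the ARRAY line and updating the initramfs; a config-fragment sketch (the UUID is a placeholder, since mdadm --create assigns a new one):

```
# Regenerate the array definition and make it survive reboots (Debian):
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u
# The appended line will resemble the original, but with a new UUID:
ARRAY /dev/md0 metadata=1.2 name=blackcanary:0 UUID=<new-array-uuid>
```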


... also, have you rebooted to make sure the configuration persists?
Attie
Licensed under cc by-sa 3.0 with attribution required.