The Raspberry Pi is great, but by default it needs USB for power and UTP for network.
Luckily there is the Smart Power Base to provide power and a Netgear adapter for wireless (I use the WNA1100 for obvious reasons), both making the Pi a lot more independent.
2013-08-14
2013-08-04
Recovering lvm2 on degraded mdadm raid 1
An mdadm RAID 1 had one disk crash; the surviving disk was moved to a new computer, where its array member partition shows up as /dev/sdb2.
Step 1: Does fdisk see the disk: yes
Step 2: Gently look with fsck, yes it is part of mdadm
Step 3: Let's see what mdadm says
Step 4: Let's fix the md device
Step 5: Gently look with mount and lvs/pvs
Step 6: Fix the lvm2 device
Step 7: The lvm2 device is not activated, yet
Step 8: Activate lvm2 device and mount
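A degraded array announces itself in /proc/mdstat: a `_` in the [UU] status field marks a missing member. A small sketch of that check (the sample mdstat content below is illustrative, mirroring the real output shown in step 4):

```shell
#!/bin/sh
# Check an mdstat-format file for degraded arrays.
# A '_' inside the [..] status field (e.g. [U_]) marks a missing member.
is_degraded() {
  grep -qE '\[U*_[U_]*\]' "$1"
}

# Illustrative sample (not a real /proc/mdstat), matching step 4 below:
cat > /tmp/mdstat.sample <<'EOF'
md127 : active (auto-read-only) raid1 sdb2[0]
      974722192 blocks super 1.0 [2/1] [U_]
EOF

if is_degraded /tmp/mdstat.sample; then
  echo "degraded array(s) found"
else
  echo "all arrays healthy"
fi
```

Point it at /proc/mdstat on a live system instead of the sample file.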
Step 1: Does fdisk see the disk: yes
root@debian6~# fdisk -l /dev/sdb

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00060f19

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         254     2040254+  83  Linux
/dev/sdb2             255      121602   974722329   83  Linux
Step 2: Gently look with fsck, yes it is part of mdadm
root@debian6~# fsck -n /dev/sdb2
fsck from util-linux-ng 2.17.2
fsck: fsck.linux_raid_member: not found
fsck: Error 2 while executing fsck.linux_raid_member for /dev/sdb2
Step 3: Let's see what mdadm says
root@debian6~# mdadm --examine /dev/sdb2
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : e8221214:354aaaf1:e9e15d78:075bfc18
           Name : storage:1
  Creation Time : Thu May 6 20:00:16 2010
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1949444384 (929.57 GiB 998.12 GB)
     Array Size : 1949444384 (929.57 GiB 998.12 GB)
   Super Offset : 1949444640 sectors
          State : clean
    Device UUID : 0c265a25:cc3f2ddd:ae7a1511:f447c74e

    Update Time : Tue May 1 23:09:11 2012
       Checksum : 498b9bd4 - correct
         Events : 33039098

   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing)
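The superblock numbers can be cross-checked with a bit of arithmetic: 1949444384 sectors of 512 bytes is exactly the 998115524608 bytes fdisk reports for the assembled device, or about 929.57 GiB:

```shell
# Cross-check mdadm's size report: 1949444384 sectors * 512 bytes/sector.
sectors=1949444384
bytes=$((sectors * 512))
echo "bytes: $bytes"    # prints 998115524608, the size fdisk shows for /dev/md127
# Convert to binary gigabytes (GiB = 2^30 bytes):
awk -v b="$bytes" 'BEGIN { printf "GiB: %.2f\n", b / (1024 ^ 3) }'   # prints GiB: 929.57
```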
Step 4: Let's fix the md device
root@debian6~# mdadm --assemble --scan
mdadm: /dev/md/storage:1 has been started with 1 drive (out of 2).
mdadm: /dev/md/0_0 has been started with 1 drive (out of 2).
root@debian6~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md126 : active (auto-read-only) raid1 sdb1[0]
      2040128 blocks [2/1] [U_]

md127 : active (auto-read-only) raid1 sdb2[0]
      974722192 blocks super 1.0 [2/1] [U_]

unused devices: <none>
root@debian6~# fdisk -l
....
Disk /dev/md127: 998.1 GB, 998115524608 bytes
2 heads, 4 sectors/track, 243680548 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md127 doesn't contain a valid partition table
Step 5: Gently look with mount and lvs/pvs
root@debian6~# mount /dev/md127 /mnt
mount: unknown filesystem type 'LVM2_member'
root@debian6~# lvs
  No volume groups found
root@debian6~# pvs
Step 6: Fix the lvm2 device
root@debian6~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "md1_vg" using metadata type lvm2
root@debian6~# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  md1_vg   1   1   0 wz--n- 929.57g    0
root@debian6~# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md127 md1_vg lvm2 a-   929.57g    0
root@debian6~# lvs
  LV      VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  md1vol1 md1_vg -wi--- 929.57g
Step 7: The lvm2 device is not activated, yet
root@debian6~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/md1_vg/md1vol1
  VG Name                md1_vg
  LV UUID                qSpslc-nO70-8jjX-5lBl-ge20-fFdR-dSEnWk
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                929.57 GiB
  Current LE             475938
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
Step 8: Activate lvm2 device and mount
root@debian6~# vgchange -a y
  1 logical volume(s) in volume group "md1_vg" now active
root@debian6~# mount /dev/md1_vg/md1vol1 /mnt
root@debian6~# mount | grep mnt
/dev/mapper/md1_vg-md1vol1 on /mnt type xfs (rw)
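With the data mounted and safe, the mirror is still running on one drive out of two. If a replacement disk is added later, redundancy can be restored roughly like this: a sketch only, assuming the new disk shows up as /dev/sdc (hypothetical; verify the device names on the actual system before running anything as root):

```shell
#!/bin/sh
# Sketch: restore redundancy after the recovery above.
# Assumptions (not from the original session): the surviving disk is /dev/sdb,
# the replacement appears as /dev/sdc, and the arrays are md126/md127.
if [ -b /dev/sdc ]; then
  sfdisk -d /dev/sdb | sfdisk /dev/sdc        # copy sdb's partition table to sdc
  mdadm --manage /dev/md126 --add /dev/sdc1   # re-add a member to the small mirror
  mdadm --manage /dev/md127 --add /dev/sdc2   # re-add a member to the data mirror
  cat /proc/mdstat                            # watch the resync progress
else
  echo "replacement disk /dev/sdc not present; skipping"
fi
```

mdadm will resync the new member in the background; /proc/mdstat shows the progress and the status field returns to [UU] when the rebuild finishes.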