02-23-2012, 02:17 PM
Jeffrey Ross

degraded array at reboot

The system is running Fedora 16 with RAID 1.

Upon a reboot some, but not all, of the partitions come up as degraded,
and it's always the same partitions on the same disk (/dev/sda): /dev/md2,
/dev/md6, and /dev/md7 (/usr, /boot, and /home respectively; /, /var, and
swap mount with no issue).
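
For reference, a degraded member shows up in /proc/mdstat as a mirror with
only one active device. The snippet below is illustrative for md2 (numbers
elided rather than copied from this box):

$ cat /proc/mdstat
...
md2 : active raid1 sdb2[1]
      ... blocks ... [2/1] [_U]

Here [2/1] and [_U] mean only one of the two mirror halves is present.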

config files are:

$ cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f7d27973:c1e3562c:c97c2b84:778f9f47
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=e36e8193:d486ae1b:d9d2d364:ba744b1a
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=0bd93f75:e97de149:0c512d92:67476c03
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=a3dcd591:ad258de3:429e09ef:aee74491
ARRAY /dev/md6 level=raid1 num-devices=2 UUID=7ef1e8e1:d8d1efd9:bfe78010:bc810f04
ARRAY /dev/md7 level=raid1 num-devices=2 UUID=e496a7cf:f99735d8:4d63b4bd:567000c5

$ sudo sfdisk -l /dev/sda

Disk /dev/sda: 20023 cylinders, 255 heads, 63 sectors/track
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 1032- 1033- 8290304 fd Linux raid autodetect
/dev/sda2 1032+ 1988- 957- 7680000 fd Linux raid autodetect
/dev/sda3 1988+ 2944- 957- 7680000 fd Linux raid autodetect
/dev/sda4 2944+ 20023- 17079- 137184256 5 Extended
/dev/sda5 2944+ 3900- 957- 7680000 fd Linux raid autodetect
/dev/sda6 * 3900+ 4002- 102- 819200 fd Linux raid autodetect
/dev/sda7 4002+ 20023- 16021- 128681984 fd Linux raid autodetect
$ sudo sfdisk -l /dev/sdb

Disk /dev/sdb: 20023 cylinders, 255 heads, 63 sectors/track
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0+ 1032- 1033- 8290304 fd Linux raid autodetect
/dev/sdb2 1032+ 1988- 957- 7680000 fd Linux raid autodetect
/dev/sdb3 1988+ 2944- 957- 7680000 fd Linux raid autodetect
/dev/sdb4 2944+ 20023- 17079- 137184256 5 Extended
/dev/sdb5 2944+ 3900- 957- 7680000 fd Linux raid autodetect
/dev/sdb6 * 3900+ 4002- 102- 819200 fd Linux raid autodetect
/dev/sdb7 4002+ 20023- 16021- 128681984 fd Linux raid autodetect



/dev/sd[ab]1 = /dev/md1
/dev/sd[ab]2 = /dev/md2
etc
except /dev/sd[ab]5 = /dev/md0

In the past I've tried a simple add of the partition; other times I've
tried --zero-superblock, but each time the problem comes back after a
reboot.
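
Concretely, the attempts were roughly the following, using the md2/sda2
pair as the example:

# re-add the dropped member
$ sudo mdadm /dev/md2 --add /dev/sda2

# or wipe its RAID superblock first and add it back as a fresh member
$ sudo mdadm --zero-superblock /dev/sda2
$ sudo mdadm /dev/md2 --add /dev/sda2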

Suggestions?

Thanks, Jeff



--
users mailing list
users@lists.fedoraproject.org
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org
 
02-23-2012, 02:28 PM
Bruno Wolff III

degraded array at reboot

On Thu, Feb 23, 2012 at 10:17:00 -0500,
Jeffrey Ross <jeff@bubble.org> wrote:
> system is running Fedora 16 with RAID 1
>
> upon a reboot some but not all of the partitions come up as degraded and
> its always the same partitions on the same disk (/dev/sda) /dev/md2,
> /dev/md6, and /dev/md7 (/usr, /boot, & /home respectively, /, /var, and
> swap mount with no issue)
>
> config files are:
>
> $ cat /etc/mdadm.conf
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md0 level=raid1 num-devices=2
> UUID=f7d27973:c1e3562c:c97c2b84:778f9f47
> ARRAY /dev/md1 level=raid1 num-devices=2
> UUID=e36e8193:d486ae1b:d9d2d364:ba744b1a
> ARRAY /dev/md2 level=raid1 num-devices=2
> UUID=0bd93f75:e97de149:0c512d92:67476c03
> ARRAY /dev/md3 level=raid1 num-devices=2
> UUID=a3dcd591:ad258de3:429e09ef:aee74491
> ARRAY /dev/md6 level=raid1 num-devices=2
> UUID=7ef1e8e1:d8d1efd9:bfe78010:bc810f04
> ARRAY /dev/md7 level=raid1 num-devices=2
> UUID=e496a7cf:f99735d8:4d63b4bd:567000c5

Have you double-checked those UUIDs against the arrays? My first guess
would be that there is a mismatch for the problem arrays.
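
Something like the following should show the UUIDs the arrays themselves
report, for comparison against mdadm.conf (md2 and sda2 used as examples):

$ sudo mdadm --detail /dev/md2 | grep UUID
$ sudo mdadm --examine /dev/sda2 | grep UUID
$ sudo mdadm --examine --scan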
 
02-23-2012, 10:17 PM
Sam Varshavchik

degraded array at reboot

Jeffrey Ross writes:

> system is running Fedora 16 with RAID 1
>
> upon a reboot some but not all of the partitions come up as degraded and

This appears to be a recurring bug that has yet to be identified. It
happens sometimes if you do not have all RAID UUIDs enumerated on the
kernel boot command line.


Add any missing RAID UUIDs to /etc/default/grub; there should be an
"rd.md.uuid=UUID" entry for each of your RAID (not partition) UUIDs.
Rerun grub2-mkconfig to update your grub.cfg.
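
Roughly like this, using the first two UUIDs from your mdadm.conf as
examples (the grub.cfg path may differ on your setup):

# /etc/default/grub
GRUB_CMDLINE_LINUX="... rd.md.uuid=f7d27973:c1e3562c:c97c2b84:778f9f47 rd.md.uuid=e36e8193:d486ae1b:d9d2d364:ba744b1a ..."

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg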


 
02-24-2012, 10:46 AM
Jeffrey Ross

degraded array at reboot

On 02/23/2012 06:17 PM, Sam Varshavchik wrote:
> Jeffrey Ross writes:
>
>> system is running Fedora 16 with RAID 1
>>
>> upon a reboot some but not all of the partitions come up as degraded and
>
> This appears to be a recurring bug that has yet to be identified. It
> happens sometimes if you do not have all RAID UUIDs enumerated on the
> kernel boot command line.
>
> Add any missing RAID UUIDs to /etc/default/grub; there should be an
> "rd.md.uuid=UUID" entry for each of your RAID (not partition) UUIDs.
> Rerun grub2-mkconfig to update your grub.cfg.

I'm using grub, not grub2, but I'm guessing the procedure is similar.
Currently I have several entries for the RAID UUIDs, but not all of them:

(it was one long line; I broke it up for readability)

kernel /vmlinuz-3.2.7-1.fc16.x86_64 ro
root=UUID=70ef146a-ba51-498a-9923-8500736d4f1f
rd_MD_UUID=f7d27973:c1e3562c:c97c2b84:778f9f47
rd_MD_UUID=e36e8193:d486ae1b:d9d2d364:ba744b1a
rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8
SYSFONT=latarcyrheb-sun16 KEYTABLE=us

This makes sense, as only 3 of the 5 partitions were "healthy". I'll
continue with the same format for the other two and re-run "grub-install".



Thanks for the pointer.

Jeff
 
