12-03-2007, 07:04 PM
ben.div

Boot with a degraded RAID 5

Hi all

I have set up two software RAID arrays (RAID 5 and RAID 0) across 3
hard disks (120, 160 and 250 GB) on my Gutsy box. The RAID 5 array
(md0) contains the root filesystem, and the RAID 0 array (md1) is
mounted as a storage (currently unused) partition. The /boot partition
is a plain ext3 partition, present on each disk (duplicated manually,
for the moment).

I'm trying to get this setup to boot even when the RAID 5 array (md0)
is degraded (i.e. one disk has failed). When all disks are present, the
system boots. But if I try with only 2 disks, the initramfs loads fine,
but md0 is never assembled, so the system can't find / and stops
booting. When the boot fails, the system drops me into the initramfs
console. From there, I can start my md array with this command:

# mdadm --assemble --scan --run

The "--run" option tell mdadm to start array, even in degraded mode.

So I suspected that the wrong option was being passed to mdadm in the
initramfs, telling it not to run a degraded array. I found (by grepping
the initramfs contents) that /etc/udev/rules.d/85-mdadm.rules contains
this line:

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*",
RUN+="watershed /sbin/mdadm --assemble --scan --no-degraded"

I guessed this was the mdadm invocation used at boot, so I changed it,
built a new initramfs, rebooted with only 2 disks and... nothing more,
it doesn't start at all :/
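
For reference, the edited rule looked like this; I then rebuilt the
initramfs (with update-initramfs -u, assuming the standard
initramfs-tools setup) so the change would take effect at boot:

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*",
RUN+="watershed /sbin/mdadm --assemble --scan --run"

# update-initramfs -u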

So, after this long story (sorry), my questions:

Do you think I'm totally lost, or is editing this file the right way?
Is there a good reason why Ubuntu's developers chose this "--no-degraded"
option for mdadm by default?
What more can I do?

Thanks for reading!

Ben



 
12-04-2007, 05:16 PM
ben.div

Boot with a degraded RAID 5

Whoa! Can absolutely nobody help me with this question? I've already
asked about this on 4 or 5 lists and forums, and so far I've accumulated
zero answers. Where could I find help on this subject? The kernel team?
Who developed this part (booting from the initramfs and device detection)?

I've been stuck on this problem for 2 weeks. Please, help.

Ben

Ben Ben wrote:
> [...]

 
12-04-2007, 07:47 PM
Phillip Susi

Boot with a degraded RAID 5

ben.div wrote:
> Whoa! Can absolutely nobody help me with this question? I've already
> asked about this on 4 or 5 lists and forums, and so far I've accumulated
> zero answers. Where could I find help on this subject? The kernel team?
> Who developed this part (booting from the initramfs and device detection)?
>
> I've been stuck on this problem for 2 weeks. Please, help.

Known issue... though I can't seem to find the bug # now.

> Ben Ben wrote:
>> So I suspected that the wrong option was being passed to mdadm in the
>> initramfs, telling it not to run a degraded array. I found (by grepping
>> the initramfs contents) that /etc/udev/rules.d/85-mdadm.rules contains
>> this line:
>>
>> SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*",
>> RUN+="watershed /sbin/mdadm --assemble --scan --no-degraded"
>>
>> I guessed this was the mdadm invocation used at boot, so I changed it,
>> built a new initramfs, rebooted with only 2 disks and... nothing more,
>> it doesn't start at all :/

Not sure what's going wrong without any description other than "it
doesn't start anymore", but that should allow you to boot with a
degraded array.

>> So, after this long story (sorry), my questions:
>>
>> Do you think I'm totally lost, or is editing this file the right way?
>> Is there a good reason why Ubuntu's developers chose this "--no-degraded"
>> option for mdadm by default?
>> What more can I do?

The reason is that we don't want to degrade an array just because one
of its disks has not been detected yet. The proper solution is to wait
for either a timeout or manual intervention before going ahead and
assembling the array degraded.
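
To sketch that approach (a hypothetical illustration, not what Ubuntu
actually ships): the fallback could be an initramfs-tools local-top
script that gives udev a grace period to assemble complete arrays, then
starts anything still inactive, degraded if need be:

#!/bin/sh
# hypothetical initramfs-tools local-top script, e.g.
# /usr/share/initramfs-tools/scripts/local-top/degraded-raid
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

sleep 30                             # arbitrary grace period for slow disks
/sbin/mdadm --assemble --scan --run  # start what's left, degraded if necessary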



 
12-05-2007, 10:22 AM
Ben Ben

Boot with a degraded RAID 5

Hi Phillip. Thank you so much for taking the time to answer me; I feel
less alone now.

2007/12/4, Phillip Susi <psusi@cfl.rr.com>:
> ben.div wrote:
> > Whoa! Can absolutely nobody help me with this question? I've already
> > asked about this on 4 or 5 lists and forums, and so far I've accumulated
> > zero answers. Where could I find help on this subject? The kernel team?
> > Who developed this part (booting from the initramfs and device detection)?
> >
> > I've been stuck on this problem for 2 weeks. Please, help.
>
> Known issue... though I can't seem to find the bug # now.
>
What is the known issue? That running a degraded RAID 5 at boot is not
possible?

> > Ben Ben wrote:
> >> So I suspected that the wrong option was being passed to mdadm in the
> >> initramfs, telling it not to run a degraded array. I found (by grepping
> >> the initramfs contents) that /etc/udev/rules.d/85-mdadm.rules contains
> >> this line:
> >>
> >> SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*",
> >> RUN+="watershed /sbin/mdadm --assemble --scan --no-degraded"
> >>
> >> I guessed this was the mdadm invocation used at boot, so I changed it,
> >> built a new initramfs, rebooted with only 2 disks and... nothing more,
> >> it doesn't start at all :/
>
> Not sure what's going wrong without any description other than "it
> doesn't start anymore", but that should allow you to boot with a
> degraded array.
>
It does the same thing I described before: the initramfs loads, the md
driver tries to run the array but can't, so it hangs for around 3
minutes and then drops me into the initramfs console. Here are the
outputs of cat /proc/mdstat at that stage. I tried several times
(rebooting each time), as the output is not always the same:

$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4] [linear] [multipath]
[raid1] [raid10]
md1 : active raid0 sda5[0] sdb5[1]
78164032 blocks 64k chunks

md0 : inactive hda3[0]
116141824 blocks

unused devices: <none>

$ cat /proc/mdstat  # kernel 2.6.22.14 with md drivers compiled into the kernel (not as modules)
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5]
[raid4] [multipath] [faulty]
md1 : inactive sdb5[1]
39102080 blocks

md0 : inactive hda3[0]
116141824 blocks

unused devices: <none>

$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4] [linear] [multipath]
[raid1] [raid10]
md1 : active raid0 sda5[0] sdb5[1]
78164032 blocks 64k chunks

md0 : inactive hda3[0]
116141824 blocks

unused devices: <none>

$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4] [linear] [multipath]
[raid1] [raid10]
md1 : inactive sdb5[1]
39102080 blocks

md0 : inactive hda3[0]
116141824 blocks

unused devices: <none>

All these tests were done with all 3 disks present, and with the --run
option for mdadm in /etc/udev/rules.d/85-mdadm.rules. You can see that
md0 is never started, and md1 (RAID 0) is sometimes started, sometimes
not. At this point, if I stop the md devices and start them manually,
it works.
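
(By "stop and start manually" I mean something like this from the
initramfs console, using the same command as before:)

# mdadm --stop /dev/md0
# mdadm --assemble --scan --run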

If I give mdadm the --no-degraded option instead, it always works (as
long as the array is not degraded).

> >> So, after this long story (sorry), my questions:
> >>
> >> Do you think I'm totally lost, or is editing this file the right way?
> >> Is there a good reason why Ubuntu's developers chose this "--no-degraded"
> >> option for mdadm by default?
> >> What more can I do?
>
> The reason is that we don't want to degrade an array just because one
> of its disks has not been detected yet. The proper solution is to wait
> for either a timeout or manual intervention before going ahead and
> assembling the array degraded.
>

Why does the --run option never work, while --no-degraded always works,
even though the RAID array is not degraded?

It sounds like you're suggesting this is a "disk not detected yet"
problem. How could I work around that? Maybe a "sleep 10" before
launching mdadm (sketched below)? But why are the disks detected for
md1 (RAID 0) but not for md0 (RAID 5), when they use the same devices
(sda and sdb for md1; hda, sda and sdb for md0)?
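
As a rough sketch, that "sleep 10" idea would mean pointing the udev
rule at a small wrapper script instead of calling mdadm directly (the
wrapper path is made up):

#!/bin/sh
# hypothetical /sbin/mdadm-delayed, referenced from 85-mdadm.rules
# in place of the direct mdadm call
sleep 10    # crude: give slow disks time to appear
exec /sbin/mdadm --assemble --scan --run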

Maybe it's a udev syntax problem (I haven't taken the time to study it)?

Another question: you said I can start the array manually and then
resume the boot process. How do I do that last part? By running /init?

Thank you!

Ben

 
12-05-2007, 02:29 PM
Phillip Susi

Boot with a degraded RAID 5

Ben Ben wrote:
> What is the known issue? That running a degraded RAID 5 at boot is not
> possible?

The known issue is that, by default, the system will not try to
activate an array in a degraded state.

> All these tests were done with all 3 disks present, and with the --run
> option for mdadm in /etc/udev/rules.d/85-mdadm.rules. You can see that
> md0 is never started, and md1 (RAID 0) is sometimes started, sometimes
> not. At this point, if I stop the md devices and start them manually,
> it works.
>
> If I give mdadm the --no-degraded option instead, it always works (as
> long as the array is not degraded).

Simply removing the --no-degraded option should do the trick.

> Another question: you said I can start the array manually and then
> resume the boot process. How do I do that last part? By running /init?

IIRC, simply exiting from the busybox shell will cause the boot process
to attempt to continue, so after you manually assemble the array, just exit.
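
In other words, from the initramfs prompt the recovery would look
roughly like this:

(initramfs) mdadm --assemble --scan --run
(initramfs) exit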

