Linux Archive > Debian > Debian Kernel

 
 
 
Old 06-01-2010, 11:49 PM
Paul Menzel
 
Bug#583917: mdadm: long delay (6–200 minutes) during boot (root device detection) after upgrade on RAID/LVM/LUKS setup

retitle 583917 mdadm: long delay (6–200 minutes) during boot (root device detection) after upgrade on RAID/LVM/LUKS setup
reassign 583917 mdadm
version 583917 3.1.2-2
notfound 583917 3.1.1-1
quit

[ I took pkg-lvm-maintainers@lists.alioth.debian.org and Michael off the
recipient list. ]

On Tuesday, 2010-06-01 at 17:06 +0200, Agustin Martin wrote:
> On Tue, Jun 01, 2010 at 04:50:56PM +0200, Agustin Martin wrote:
> > On Tue, Jun 01, 2010 at 01:32:29PM +0200, martin f krafft wrote:
> > > also sprach Paul Menzel <pm.debian@googlemail.com> [2010.06.01.1127 +0200]:
> > > > Dear Debian mdadm maintainers and Debian LVM Team,
> > > >
> > > > could you please comment on the first issue if it is related to your
> > > > packages.
> > >
> > > I don't see a way in which mdadm could be responsible for this. Have
> > > you tried downgrading it to see if the error persists?

> > Same problem here since around last Friday. Since many things related to
> > boot changed around that time (mdadm, initramfs-tools and cryptsetup,
> > which still FTBFS on most arches), I was waiting for cryptsetup to be
> > upgraded before checking further, but this does indeed seem to be an
> > mdadm problem.
> >
> > Downgrading mdadm from 3.1.2-2 to 3.0.3-2 fixes the problem.

Downgrading mdadm to 3.1.1-1 fixed this issue for me as well. At least
on the last reboot I no longer saw the
`/sys/devices/virtual/block/md[01]` messages, so I hope it is fixed.
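For anyone else affected, a pin can hold mdadm at the known-good version
until the bug is resolved. A sketch of an apt preferences entry (the
file location, e.g. a file under /etc/apt/preferences.d/, is the usual
convention and not taken from this report):

```
Package: mdadm
Pin: version 3.1.1-1
Pin-Priority: 1001
```

Remember to run update-initramfs afterwards so the initrd actually
contains the downgraded binary and script.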

> > By the way, I'd tag this problem as RC.

I have reassigned this report to mdadm and will let the maintainer
decide. It is strange that not more people have complained so far.

> Forgot to mention, I am using LVM on top of crypt here.
>
> I have compared initrd images with old and new mdadm,
>
> $ diff -Naur --brief initrd.img-2.6.32-5-686.good.dir initrd.img-2.6.32-5-686.bad.dir/
> Files initrd.img-2.6.32-5-686.good.dir//sbin/mdadm and initrd.img-2.6.32-5-686.bad.dir//sbin/mdadm differ
> Files initrd.img-2.6.32-5-686.good.dir//scripts/local-top/mdadm and initrd.img-2.6.32-5-686.bad.dir//scripts/local-top/mdadm differ
>
> It seems the only differences are the mdadm binary and the
> scripts/local-top/mdadm script:
>
> diff -Naur initrd.img-2.6.32-5-686.good.dir/scripts//local-top/mdadm initrd.img-2.6.32-5-686.bad.dir/scripts//local-top/mdadm
> --- initrd.img-2.6.32-5-686.good.dir/scripts//local-top/mdadm 2010-06-01 16:58:30.000000000 +0200
> +++ initrd.img-2.6.32-5-686.bad.dir/scripts//local-top/mdadm 2010-06-01 16:58:26.000000000 +0200
> @@ -76,8 +76,8 @@
>
> verbose && log_begin_msg "Assembling all MD arrays"
> extra_args=''
> - [ -n "${MD_HOMEHOST:-}" ] && extra_args="--homehost='$MD_HOMEHOST'"
> - if $MDADM --assemble --scan --run --auto=yes $extra_args; then
> + [ -n "${MD_HOMEHOST:-}" ] && extra_args="--homehost=$MD_HOMEHOST"
> + if $MDADM --assemble --scan --run --auto=yes${extra_args:+ $extra_args}; then
> verbose && log_success_msg "assembled all arrays."
> else
> log_failure_msg "failed to assemble all arrays."
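For what it is worth, the homehost quoting change in that diff is easy to
reproduce in isolation; whether it is connected to the boot delay is for
the maintainers to judge. A minimal sketch (the value "debian" is made
up for illustration):

```shell
#!/bin/sh
# Illustration only, not a conclusion from this report: what the quoting
# change in the diff above does to the argument mdadm receives.
MD_HOMEHOST="debian"

# Old script line: the single quotes sit inside double quotes, so they
# are literal characters in the value, not shell quoting.
old="--homehost='$MD_HOMEHOST'"
echo "old: $old"    # prints: old: --homehost='debian'

# New script line: the literal quotes are gone.
new="--homehost=$MD_HOMEHOST"
echo "new: $new"    # prints: new: --homehost=debian
```

So the old script passed a homehost value wrapped in literal quote
characters, and the new one does not; the ${extra_args:+ $extra_args}
change likewise only affects whether an empty extra_args adds a stray
empty word to the command line.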

Thanks for your analysis. I hope it will help the maintainers to decide
what is going on. If I can provide further information about my setup,
please tell me.


Thanks,

Paul
 
