RAID5 (mdadm) array hosed after grow operation

01-09-2009, 01:41 AM
whollygoat@letterboxes.org

On Thu, 8 Jan 2009 21:12:18 +1100, "Alex Samad" <alex@samad.com.au> said:
> On Wed, Jan 07, 2009 at 08:19:05PM -0800, whollygoat@letterboxes.org wrote:
> >
> > On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" <neilb@suse.de> said:
>
> [snip]
>
> > How should I have done the grow operation if not as above? The only
> > thing I see in man mdadm is the "-S" switch, which seems to disassemble
> > the array. Maybe this is because I've only tried it on the degraded
> > array this problem has left me with. At any rate, after
> >
> > mdadm -S /dev/md/0
> >
>
> [snip]
>
> >
> > Hope you can help,
>
> Hi
>
> I have grown raid5 arrays either by disk number or disk size; I have
> only ever used --grow and never the -z option.
>
> I would re-copy the info over from the small drives to the large
> drives (if you can have all the drives in at one time, that might be
> better), increase the partition size, and then run --grow on the
> array. I have done this going from 250G -> 500G -> 750G -> 1T,
> although when I have done it, I fail one drive, add the new drive,
> expand the partition size, and re-add it back into the array; once I
> have done all the drives I then run the grow.
>

I'm not sure I understand what you mean. When you copy the info over
and then increase the partition size, are you doing something like
dd if=smalldrive of=bigdrive, then using a tool like parted to resize
the partition?
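
That is, something along these lines, with invented device names (and
I'm going from memory on the parted syntax):

dd if=/dev/sda of=/dev/sdd bs=1M     # raw copy of the small disk onto its larger replacement
parted /dev/sdd resizepart 1 100%    # then stretch the RAID partition to fill the disk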

I put the large drives in (as hot spares) with a single raid partition
(type fd) that uses the entire disk, so I can't increase their size
any. Then when I failed a drive, the data it contained was rebuilt
onto the larger hot spare.
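
In other words, roughly this, again with made-up device names:

mdadm /dev/md0 --add /dev/sdd1      # new large disk, one whole-disk type-fd partition, in as a spare
mdadm /dev/md0 --fail /dev/sda1     # fail a small disk; md rebuilds its data onto the spare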

But anyway, I don't think that is going to matter. The issue I am
trying to solve is how to de-activate the bitmap. It was suggested on
the linux-raid list that my problem may have been caused by running
the grow op on an active bitmap, and I can't see from "man mdadm" how
to de-activate the bitmap.

The only thing I see about deactivation is --stop, and that
disassembles the array, in which case I can't run the grow command. I
read how to remove the bitmap, but then I guess I would have to re-add
it after the grow op. In any case, I would like to get this figured
out without too much experimentation, because swapping drives in and
out and rebuilding is pretty time consuming, so I would really like to
avoid fudging this up again.

Thanks for your help

goat
--

whollygoat@letterboxes.org

 
01-09-2009, 09:45 AM
John Robinson

On 09/01/2009 02:41, whollygoat@letterboxes.org wrote:

But anyway, I don't think that is going to matter. The issue I am
trying to solve is how to de-activate the bitmap. It was suggested on
the linux-raid list that my problem may have been caused by running
the grow op on an active bitmap, and I can't see from "man mdadm" how
to de-activate the bitmap.


man mdadm tells me:
[...]
-b, --bitmap=
Specify a file to store a write-intent bitmap in. The file should
not exist unless --force is also given. The same file should be provided
when assembling the array. If the word internal is given, then the
bitmap is stored with the metadata on the array, and so is replicated on
all devices. If the word none is given with --grow mode, then any bitmap
that is present is removed.


So I imagine you'd want to
# mdadm --grow /dev/mdX --bitmap=none
to de-activate the bitmap.
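
(--grow operates on the array while it's assembled and running, so you
wouldn't need to --stop it first. And I imagine that once your reshape
is done you could put an internal bitmap back with

# mdadm --grow /dev/mdX --bitmap=internal

though I haven't tested that here.)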

Cheers,

John.


 
01-13-2009, 02:46 AM
whollygoat@letterboxes.org

On Fri, 09 Jan 2009 10:45:56 +0000, "John Robinson" <john.robinson@anonymous.org.uk> said:
> On 09/01/2009 02:41, whollygoat@letterboxes.org wrote:
> > But anyway, I don't think that is going to matter. The issue I am
> > trying to solve is how to de-activate the bitmap. It was suggested
> > on the linux-raid list that my problem may have been caused by
> > running the grow op on an active bitmap, and I can't see from "man
> > mdadm" how to de-activate the bitmap.
>
> man mdadm tells me:
> [...]
> -b, --bitmap=
> Specify a file to store a write-intent bitmap in. The file should
> not exist unless --force is also given. The same file should be provided
> when assembling the array. If the word internal is given, then the
> bitmap is stored with the metadata on the array, and so is replicated on
> all devices. If the word none is given with --grow mode, then any bitmap
> that is present is removed.
>
> So I imagine you'd want to
> # mdadm --grow /dev/mdX --bitmap=none
> to de-activate the bitmap.

The question that came to mind when I read that in the docs was how to
recreate the bitmap on an already created array after nuking it with
"none".

I guess I also had doubts because the reply I got a few iterations
back didn't say that I shouldn't have performed the grow operation on
an existent bitmap, but on an active one, and I wasn't prepared to
make the leap from active/inactive to existent/non-existent.

But, this has all become moot anyway. When I put the original, smaller
drives back in, hoping to do the grow op over again, I was faced with
a similar problem assembling the array, so I'm guessing the problem
was caused by something other than the grow. I put the larger drives
in, zeroed them, and am in the process of recreating the array and
file systems to be populated from backups.

Thanks for the input.

goat
--

whollygoat@letterboxes.org

 
01-13-2009, 03:07 AM
Alex Samad

On Mon, Jan 12, 2009 at 07:46:08PM -0800, whollygoat@letterboxes.org wrote:
>
> On Fri, 09 Jan 2009 10:45:56 +0000, "John Robinson" <john.robinson@anonymous.org.uk> said:
> > On 09/01/2009 02:41, whollygoat@letterboxes.org wrote:

[snip]

>
> But, this has all become moot anyway. When I put the original, smaller
> drives back in, hoping to do the grow op over again, I was faced with
> a similar problem assembling the array, so I'm guessing the problem
> was caused by something other than the grow. I put the larger drives
> in, zeroed them, and am in the process of recreating the array and
> file systems to be populated from backups.

just fell into the same boat: 3 drives in a 10-drive raid6 died at the
same time on me, and I was unable to recreate the raid6, so it was back
to the backup machine.

to answer your question about the smaller disks, there is an option to
create that says the drives are okay and not to prep them:

--assume-clean

so you can recreate the array without overwriting stuff.
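
for example, something like this (device names and layout made up here;
the level, chunk size and device order have to match the original array
exactly, or you will scramble it):

mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1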

>
> Thanks for the input.
>
> goat
> --
>
> whollygoat@letterboxes.org

--
"The point now is how do we work together to achieve important goals. And one such goal is a democracy in Germany"

- George W. Bush
05/05/2006
Washington, DC
 
