Old 01-18-2012, 12:44 PM
Jim Meyering
 
blockdev --flushbufs required [was: parted issue/question]

[Following up on this thread:
http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/14999]

Alasdair G Kergon wrote:
> Try
> blockdev --flushbufs
> after any cmd that writes to a dev to see if that makes any difference.

Thanks for the work-around.
Using "blockdev --flushbufs $dev" does indeed make parted
behave the same with dm-backed storage as with other devices.
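
For reference, blockdev implements --flushbufs with the BLKFLSBUF
ioctl, which invalidates the device's cached buffers. A quick sketch
to confirm that (assuming strace is installed):

strace -e trace=ioctl blockdev --flushbufs $dev
# should show something along the lines of: ioctl(3, BLKFLSBUF, 0) = 0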

Adjusting my small example,

cd /tmp; truncate -s 10m g && loop=$(losetup --show -f g)
echo 0 100 linear $loop 0 | dmsetup create zub
dev=/dev/mapper/zub
parted -s $dev \
  mklabel gpt \
  mkpart efi 34s 34s \
  mkpart root 35s 35s \
  mkpart roo2 36s 36s \
  u s p
blockdev --flushbufs $dev # FIXME: required with device-mapper-1.02.65-5

# write random bits to p1
dd of=${dev}p1 if=/dev/urandom count=1
dd if=${dev}p1 of=p1-copy.pre count=1
parted -s $dev mkpart p4 37s 37s
blockdev --flushbufs $dev # FIXME: required with device-mapper-1.02.65-5

dd if=${dev}p1 of=p1-copy.post count=1
cmp -l p1-copy.pre p1-copy.post

With that, the "cmp" shows no differences.
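
For completeness, a tear-down sketch (using the names from the
commands above):

dmsetup remove zub    # remove the dm linear target
losetup -d $loop      # detach the backing loop device
rm -f /tmp/g /tmp/p1-copy.pre /tmp/p1-copy.post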

Does this sound like a problem in device-mapper land,
or in how parted interacts with DM?
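
One way to narrow it down: bypass the buffer cache with a direct read
and compare (a sketch; iflag=direct makes dd open the device O_DIRECT):

dd if=${dev}p1 iflag=direct of=p1-copy.direct count=1
cmp -l p1-copy.post p1-copy.direct
# if the direct read differs from the cached one, the stale data is
# sitting in the partition device's buffer cache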

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
Old 01-18-2012, 12:58 PM
Mike Burns
 
blockdev --flushbufs required [was: parted issue/question]

Thanks, Jim.

Moving to the correct ovirt-node mailing list (node-devel@ovirt.org).

On Wed, 2012-01-18 at 14:44 +0100, Jim Meyering wrote:
> [...]



--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
Old 01-18-2012, 03:22 PM
Alan Pevec
 
blockdev --flushbufs required [was: parted issue/question]

> On Wed, 2012-01-18 at 14:44 +0100, Jim Meyering wrote:
>> [Following up on this thread:
>> http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/14999]
>> Alasdair G Kergon wrote:
>> > Try
>> >   blockdev --flushbufs
>> > after any cmd that writes to a dev to see if that makes any difference.
>>
>> Thanks for the work-around.
>> Using "blockdev --flushbufs $dev" does indeed make parted
>> behave the same with dm-backed storage as with other devices.

Thanks Jim!
That reminds me: we've already seen something similar, and there's still
a workaround with drop_caches in the ovirt-config-boot installer:
# flush to sync DM and blockdev, workaround from rhbz#623846#c14
echo 3 > /proc/sys/vm/drop_caches
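
(For reference, the documented drop_caches values, per the kernel's
Documentation/sysctl/vm.txt, sketched below.)

echo 1 > /proc/sys/vm/drop_caches   # free page cache only
echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes
echo 3 > /proc/sys/vm/drop_caches   # free both

Note that this drops caches system-wide, whereas "blockdev --flushbufs
$dev" invalidates only that one device's buffers.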

But bug 623846 was supposed to be fixed in RHEL 6.0?

Alan

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
