04-21-2010, 08:42 PM
Mike Snitzer
 
ext3/4: enhance fsync performance when using cfq

On Thu, Apr 8, 2010 at 10:09 AM, Jens Axboe <jens.axboe@oracle.com> wrote:
> On Thu, Apr 08 2010, Vivek Goyal wrote:
>> On Thu, Apr 08, 2010 at 01:04:42PM +0200, Jens Axboe wrote:
>> > On Wed, Apr 07 2010, Vivek Goyal wrote:
>> > > On Wed, Apr 07, 2010 at 05:18:12PM -0400, Jeff Moyer wrote:
>> > > > Hi again,
>> > > >
>> > > > So, here's another stab at fixing this.  This patch is very much an RFC,
>> > > > so do not pull it into anything bound for Linus. ;-)  For those new to
>> > > > this topic, here is the original posting: http://lkml.org/lkml/2010/4/1/344
>> > > >
>> > > > The basic problem is that, when running iozone on smallish files (up to
>> > > > 8MB in size) and including fsync in the timings, deadline outperforms
>> > > > CFQ by a factor of about 5 for 64KB files, and by about 10% for 8MB
>> > > > files.  From examining the blktrace data, it appears that iozone will
>> > > > issue an fsync() call, and will have to wait until its CFQ timeslice
>> > > > has expired before the journal thread can run to actually commit data to
>> > > > disk.
>> > > >
>> > > > The approach below puts an explicit call into the filesystem-specific
>> > > > fsync code to yield the disk so that the jbd[2] process has a chance to
>> > > > issue I/O.  This brings performance of CFQ in line with deadline.
>> > > >
>> > > > There is one outstanding issue with the patch that Vivek pointed out.
>> > > > Basically, this could starve out the sync-noidle workload if there is a
>> > > > lot of fsync-ing going on.  I'll address that in a follow-on patch.  For
>> > > > now, I wanted to get the idea out there for others to comment on.
>> > > >
>> > > > Thanks a ton to Vivek for spotting the problem with the initial
>> > > > approach, and for his continued review.
>> > > >
...
>> > > So we've got to take care of two issues now.
>> > >
>> > > - Make it work with dm/md devices also. Somehow we shall have to propagate
>> > >   this yield semantic down the stack.
>> >
>> > The way that Jeff set it up, it's completely parallel to e.g. congestion
>> > or unplugging. So that should be easily doable.
>> >
>>
>> Ok, so various dm targets now need to define "yield_fn" and propagate the
>> yield call to all the component devices.
>
> Exactly.
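
To make the quoted approach concrete, here is a minimal sketch of the
fsync-side yield as I understand it. This is not Jeff's actual patch:
the placement and blk_yield()'s signature are assumptions for
illustration only.

/*
 * Sketch only -- not the RFC patch.  Assumes blk_yield() takes the
 * request_queue of the filesystem's backing device.
 */
static int ext3_sync_file_sketch(struct inode *inode, tid_t commit_tid)
{
	journal_t *journal = EXT3_SB(inode->i_sb)->s_journal;

	/* Ask kjournald to start committing the transaction. */
	log_start_commit(journal, commit_tid);

	/*
	 * Give up this task's CFQ timeslice so the journal thread's
	 * I/O is dispatched now instead of after the slice expires.
	 */
	blk_yield(bdev_get_queue(inode->i_sb->s_bdev));

	/* Sleep until the commit has been written to disk. */
	return log_wait_commit(journal, commit_tid);
}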

To do so, doesn't DM (and MD) need a blk_queue_yield() setter to
establish its own yield_fn? The established dm_yield_fn would call
blk_yield() for all real devices in a given DM target, something like
how blk_queue_merge_bvec() or blk_queue_make_request() allow DM to
provide functional extensions.
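
Something like the following is what I have in mind -- all of it
hypothetical, since none of these hooks exist in the tree today; the
request_queue member and the dm-internal helpers used are assumptions:

typedef void (yield_fn) (struct request_queue *q);

/* Block layer: a setter in the style of blk_queue_make_request(). */
void blk_queue_yield(struct request_queue *q, yield_fn *fn)
{
	q->yield_fn = fn;		/* assumed new queue member */
}

void blk_yield(struct request_queue *q)
{
	if (q->yield_fn) {
		/* Stacked device (dm/md): fan the yield out. */
		q->yield_fn(q);
		return;
	}
	/* Real device: yield the current elevator timeslice here. */
}

/* DM: propagate the yield to every component device of the table. */
static void dm_yield_fn(struct request_queue *q)
{
	struct mapped_device *md = q->queuedata;
	struct dm_table *t = dm_get_live_table(md);
	struct dm_dev_internal *dd;

	if (!t)
		return;

	list_for_each_entry(dd, dm_table_get_devices(t), list)
		blk_yield(bdev_get_queue(dd->dm_dev.bdev));

	dm_table_put(t);
}

That would mirror how dm already fans out congestion checks to its
component devices.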

I'm not seeing such a yield_fn hook for stacking drivers to use. And
as is, jbd and jbd2 just call blk_yield() directly, so there is no way
for the block layer to call into DM.

What am I missing?

Thanks,
Mike

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
