Andreas Dilger 09-06-2010 11:15 AM

jbd2: Modify ASYNC_COMMIT code to not rely on queue draining on barrier
 
On 2010-08-26, at 10:23, Tejun Heo wrote:
> From 49f4cef00a1bd3c79fb2fe1f982c5157f0792867 Mon Sep 17 00:00:00 2001
> From: Jan Kara <jack@suse.cz>
>
> Currently JBD2 relies on blkdev_issue_flush() draining the queue when the ASYNC_COMMIT
> feature is set. This property is going away so make JBD2 wait for buffers it
> needs on its own before submitting the cache flush.

I finally had a chance to look at this patch more closely, and I think it may break the ASYNC_COMMIT functionality by forcing a wait for all of the data blocks _before_ the journal commit block is even submitted, even though ASYNC_COMMIT is enabled.

When ASYNC_COMMIT is enabled, it means that the journal transaction coherency is handled by the commit block checksum of the transaction data blocks, so the commit block can be submitted to the journal at the same time as the transaction data blocks. The flush on the journal device (and the filesystem device, if they are separate) should happen after both are submitted.

However, if ASYNC_COMMIT is NOT enabled, then the transaction data blocks should be submitted and flushed before the journal commit block is submitted, and then there should be a second cache flush afterward.
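The two orderings described above can be sketched with a toy C helper. This is purely illustrative: the function name and step labels are hypothetical and do not correspond to real jbd2 symbols; the helper only models the submit/flush sequence in each mode.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative model of the two commit orderings; hypothetical
 * helper, not actual jbd2 code. */
static void jbd2_commit_sequence(int async_commit, char *out, size_t n)
{
	if (async_commit)
		/* Checksummed commit block: the commit block can be
		 * submitted together with the transaction data, and a
		 * single flush after both makes the commit durable. */
		snprintf(out, n, "data,commit,flush");
	else
		/* No checksum: data must be flushed to stable storage
		 * before the commit block is submitted, and a second
		 * flush must follow the commit block. */
		snprintf(out, n, "data,flush,commit,flush");
}
```

The point of the ASYNC_COMMIT mode is eliminating that first flush before the commit block, which is exactly what waiting on the data buffers too early would defeat.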

---
> This patch is necessary before enabling flush/fua support in jbd2.
> The flush-fua git tree has been updated to include this between patch
> 24 and 25.
>
> Thanks.
>
> fs/jbd2/commit.c | 29 ++++++++++++++++-------------
> 1 files changed, 16 insertions(+), 13 deletions(-)
>
> diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
> index 7c068c1..8797fd1 100644
> --- a/fs/jbd2/commit.c
> +++ b/fs/jbd2/commit.c
> @@ -701,6 +701,16 @@ start_journal_io:
> }
> }
>
> + err = journal_finish_inode_data_buffers(journal, commit_transaction);
> + if (err) {
> + printk(KERN_WARNING
> + "JBD2: Detected IO errors while flushing file data "
> + "on %s\n", journal->j_devname);
> + if (journal->j_flags & JBD2_ABORT_ON_SYNCDATA_ERR)
> + jbd2_journal_abort(journal, err);
> + err = 0;
> + }
> +
> /*
> * If the journal is not located on the file system device,
> * then we must flush the file system device before we issue
> @@ -719,19 +729,6 @@ start_journal_io:
> &cbh, crc32_sum);
> if (err)
> __jbd2_journal_abort_hard(journal);
> - if (journal->j_flags & JBD2_BARRIER)
> - blkdev_issue_flush(journal->j_dev, GFP_KERNEL, NULL,
> - BLKDEV_IFL_WAIT);
> - }
> -
> - err = journal_finish_inode_data_buffers(journal, commit_transaction);
> - if (err) {
> - printk(KERN_WARNING
> - "JBD2: Detected IO errors while flushing file data "
> - "on %s\n", journal->j_devname);
> - if (journal->j_flags & JBD2_ABORT_ON_SYNCDATA_ERR)
> - jbd2_journal_abort(journal, err);
> - err = 0;
> }
>
> /* Lo and behold: we have just managed to send a transaction to
> @@ -845,6 +842,12 @@ wait_for_iobuf:
> }
> if (!err && !is_journal_aborted(journal))
> err = journal_wait_on_commit_record(journal, cbh);
> + if (JBD2_HAS_INCOMPAT_FEATURE(journal,
> + JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT) &&
> + journal->j_flags & JBD2_BARRIER) {
> + blkdev_issue_flush(journal->j_dev, GFP_KERNEL, NULL,
> + BLKDEV_IFL_WAIT);
> + }
>
> if (err)
> jbd2_journal_abort(journal, err);
> --
> 1.7.1
>


Cheers, Andreas

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

