Old 03-01-2010, 11:23 PM
Mike Snitzer
 
mikulas' shared snapshot patches

Mikulas,

This is just the full submission of your shared snapshot patches from:
http://people.redhat.com/mpatocka/patches/kernel/new-snapshots/r15/

I think the next phase of review should possibly be driven through the
dm-devel mailing list. I'd at least like the option of exchanging
mail on aspects of some of these patches.

The first patch has one small cleanup in do_origin_write(): I
eliminated the 'midcycle' goto.

But the primary difference with this submission (when compared to your
r15 patches) is that I edited the patches for whitespace and typos. I'm
_really_ not trying to step on your hard work by doing this
superficial stuff. But while reviewing the code, the insanely long
lines really were distracting. I tried very hard to preserve the
intent of the DM_MULTISNAP_SET_ERROR/DM_ERR messages by still having
grep'able content (on a single line).
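
For example, a call like the following (a hypothetical one, not taken
from the patches, just to show the wrapping convention used in the
incremental diff below) has its arguments moved onto continuation
lines while the message string stays greppable on a single line:

    DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
        ("some_function: invalid chunk %llx",
        (unsigned long long)chunk));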

I also didn't go crazy like a checkpatch.pl zealot... I didn't even run
these patches through checkpatch!

I know how sensitive you are about letting the editor do the wrapping,
but I truly think the length of some lines would never get past
Alasdair (or Linus) -- even though they have relaxed the rules for
line length.

I'll respond to this cover-letter with a single incremental patch that
shows my edits.

All my edits aside, I must say I'm impressed by the amount/complexity
of code you've cranked out for this shared snapshot support. It is
going to take me many more review iterations of these patches before
I'll be able to say I understand all that these patches achieve.

I think drivers/md/dm-bufio.c will be controversial (to the greater
upstream community), but I understand that it enabled you to focus on
the problem of shared snapshots without having to concern yourself
with core VM and block changes to accomplish the same.
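
For anyone who hasn't looked at it yet, dm-bufio is a small buffer
cache in front of a block device. A rough usage sketch, based only on
the prototypes in dm-bufio.h further down (error handling trimmed; the
zero threshold/limit arguments just mean "use the built-in memory
defaults"):

    struct dm_bufio_client *c;
    struct dm_buffer *bp;
    void *data;

    /* one client per underlying device, with a fixed block size */
    c = dm_bufio_client_create(bdev, 4096, 0, 0, 0);

    /* read block 10 into the cache and hold a reference on it */
    data = dm_bufio_read(c, 10, &bp);
    if (!IS_ERR(data)) {
            /* ... modify the block's data in place ... */
            dm_bufio_mark_buffer_dirty(bp);
            dm_bufio_release(bp);
    }

    /* make sure everything dirty is on disk before committing */
    dm_bufio_write_dirty_buffers(c);

    dm_bufio_client_destroy(c);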

Mikulas Patocka (14):
dm-multisnap-common
dm-bufio
dm-multisnap-mikulas-headers
dm-multisnap-mikulas-alloc
dm-multisnap-mikulas-blocks
dm-multisnap-mikulas-btree
dm-multisnap-mikulas-commit
dm-multisnap-mikulas-delete
dm-multisnap-mikulas-freelist
dm-multisnap-mikulas-io
dm-multisnap-mikulas-snaps
dm-multisnap-mikulas-common
dm-multisnap-mikulas-config
dm-multisnap-daniel

Documentation/device-mapper/dm-multisnapshot.txt | 77 +
drivers/md/Kconfig | 33 +
drivers/md/Makefile | 10 +
drivers/md/dm-bufio.c | 987 +++++++++++
drivers/md/dm-bufio.h | 35 +
drivers/md/dm-multisnap-alloc.c | 590 +++++++
drivers/md/dm-multisnap-blocks.c | 333 ++++
drivers/md/dm-multisnap-btree.c | 838 +++++++++
drivers/md/dm-multisnap-commit.c | 245 +++
drivers/md/dm-multisnap-daniel.c | 1711 ++++++++++++++++++
drivers/md/dm-multisnap-delete.c | 137 ++
drivers/md/dm-multisnap-freelist.c | 296 ++++
drivers/md/dm-multisnap-io.c | 209 +++
drivers/md/dm-multisnap-mikulas-struct.h | 380 ++++
drivers/md/dm-multisnap-mikulas.c | 760 ++++++++
drivers/md/dm-multisnap-mikulas.h | 247 +++
drivers/md/dm-multisnap-private.h | 161 ++
drivers/md/dm-multisnap-snaps.c | 636 +++++++
drivers/md/dm-multisnap.c | 2007 ++++++++++++++++++++++
drivers/md/dm-multisnap.h | 183 ++
20 files changed, 9875 insertions(+), 0 deletions(-)
create mode 100644 Documentation/device-mapper/dm-multisnapshot.txt
create mode 100644 drivers/md/dm-bufio.c
create mode 100644 drivers/md/dm-bufio.h
create mode 100644 drivers/md/dm-multisnap-alloc.c
create mode 100644 drivers/md/dm-multisnap-blocks.c
create mode 100644 drivers/md/dm-multisnap-btree.c
create mode 100644 drivers/md/dm-multisnap-commit.c
create mode 100644 drivers/md/dm-multisnap-daniel.c
create mode 100644 drivers/md/dm-multisnap-delete.c
create mode 100644 drivers/md/dm-multisnap-freelist.c
create mode 100644 drivers/md/dm-multisnap-io.c
create mode 100644 drivers/md/dm-multisnap-mikulas-struct.h
create mode 100644 drivers/md/dm-multisnap-mikulas.c
create mode 100644 drivers/md/dm-multisnap-mikulas.h
create mode 100644 drivers/md/dm-multisnap-private.h
create mode 100644 drivers/md/dm-multisnap-snaps.c
create mode 100644 drivers/md/dm-multisnap.c
create mode 100644 drivers/md/dm-multisnap.h

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
Old 03-01-2010, 11:32 PM
Mike Snitzer
 
mikulas' shared snapshot patches

On Mon, Mar 01 2010 at 7:23pm -0500,
Mike Snitzer <snitzer@redhat.com> wrote:

> But the primary difference with this submission (when compared to your
> r15 patches) is that I edited the patches for whitespace and typos. I'm
> _really_ not trying to step on your hard work by doing this
> superficial stuff. But while reviewing the code, the insanely long
> lines really were distracting. I tried very hard to preserve the
> intent of the DM_MULTISNAP_SET_ERROR/DM_ERR messages by still having
> grep'able content (on a single line).
>
> I also didn't go crazy like a checkpatch.pl zealot... I didn't even run
> these patches through checkpatch!
>
> I know how sensitive you are about letting the editor do the wrapping,
> but I truly think the length of some lines would never get past
> Alasdair (or Linus) -- even though they have relaxed the rules for
> line length.
>
> I'll respond to this cover-letter with a single incremental patch that
> shows my edits.

As promised:

drivers/md/dm-bufio.c | 132 +++++++---------
drivers/md/dm-bufio.h | 11 +-
drivers/md/dm-multisnap-alloc.c | 75 +++++----
drivers/md/dm-multisnap-blocks.c | 57 ++++---
drivers/md/dm-multisnap-btree.c | 253 +++++++++++++++++-------------
drivers/md/dm-multisnap-commit.c | 36 +++--
drivers/md/dm-multisnap-daniel.c | 52 ++++---
drivers/md/dm-multisnap-delete.c | 11 +-
drivers/md/dm-multisnap-freelist.c | 45 +++---
drivers/md/dm-multisnap-io.c | 26 ++--
drivers/md/dm-multisnap-mikulas-struct.h | 24 ++--
drivers/md/dm-multisnap-mikulas.c | 119 +++++++++------
drivers/md/dm-multisnap-mikulas.h | 88 +++++++---
drivers/md/dm-multisnap-snaps.c | 135 ++++++++++-------
drivers/md/dm-multisnap.c | 203 +++++++++++++-----------
drivers/md/dm-multisnap.h | 51 ++++--
16 files changed, 755 insertions(+), 563 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index c158622..44dbb0e 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -40,10 +40,10 @@
*
* In case of memory pressure, the buffer may be written after
* dm_bufio_mark_buffer_dirty, but before dm_bufio_write_dirty_buffers.
- * So, dm_bufio_write_dirty_buffers guarantees that the buffer is on-disk,
+ * So dm_bufio_write_dirty_buffers guarantees that the buffer is on-disk
* but the actual writing may occur earlier.
*
- * dm_bufio_release_move --- like dm_bufio_release, and also move the buffer to
+ * dm_bufio_release_move --- like dm_bufio_release but also move the buffer to
* the new block. dm_bufio_write_dirty_buffers is needed to commit the new
* block.
* dm_bufio_drop_buffers --- clear all buffers.
@@ -76,7 +76,7 @@

/*
* Don't try to kmalloc blocks larger than this.
- * For exaplanation, see dm_bufio_alloc_buffer_data below.
+ * For explanation, see dm_bufio_alloc_buffer_data below.
*/
#define DM_BUFIO_BLOCK_SIZE_KMALLOC_LIMIT PAGE_SIZE

@@ -95,12 +95,11 @@ struct dm_bufio_client {
* are linked to lru with their lru_list field.
* dirty and clean buffers that are being written are linked
* to dirty_lru with their lru_list field. When the write
- * finishes, the buffer cannot be immediatelly relinked
+ * finishes, the buffer cannot be immediately relinked
* (because we are in an interrupt context and relinking
* requires process context), so some clean-not-writing
* buffers can be held on dirty_lru too. They are later
- * added to
- * lru in the process context.
+ * added to lru in the process context.
*/
struct list_head lru;
struct list_head dirty_lru;
@@ -124,7 +123,7 @@ struct dm_bufio_client {
};

/*
- * A method, with wich the data is allocated:
+ * A method, with which the data is allocated:
* kmalloc(), __get_free_pages() or vmalloc().
* See the comment at dm_bufio_alloc_buffer_data.
*/
@@ -158,22 +157,23 @@ struct dm_buffer {
* __get_free_pages can randomly fail, if the memory is fragmented.
* __vmalloc won't randomly fail, but vmalloc space is limited (it may be
* as low as 128M) --- so using it for caching is not appropriate.
- * If the allocation may fail, we use __get_free_pages, memory fragmentation
+ * If the allocation may fail we use __get_free_pages. Memory fragmentation
* won't have fatal effect here, it just causes flushes of some other
* buffers and more I/O will be performed.
- * If the allocation shouldn't fail, we use __vmalloc. This is only for
+ * If the allocation shouldn't fail we use __vmalloc. This is only for
* the initial reserve allocation, so there's no risk of wasting
* all vmalloc space.
*/
-
-static void *dm_bufio_alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask, char *data_mode)
+static void *dm_bufio_alloc_buffer_data(struct dm_bufio_client *c,
+ gfp_t gfp_mask, char *data_mode)
{
if (c->block_size <= DM_BUFIO_BLOCK_SIZE_KMALLOC_LIMIT) {
*data_mode = DATA_MODE_KMALLOC;
return kmalloc(c->block_size, gfp_mask);
} else if (gfp_mask & __GFP_NORETRY) {
*data_mode = DATA_MODE_GET_FREE_PAGES;
- return (void *)__get_free_pages(gfp_mask, c->pages_per_block_bits);
+ return (void *)__get_free_pages(gfp_mask,
+ c->pages_per_block_bits);
} else {
*data_mode = DATA_MODE_VMALLOC;
return __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL);
@@ -183,8 +183,8 @@ static void *dm_bufio_alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mas
/*
* Free buffer's data.
*/
-
-static void dm_bufio_free_buffer_data(struct dm_bufio_client *c, void *data, char data_mode)
+static void dm_bufio_free_buffer_data(struct dm_bufio_client *c,
+ void *data, char data_mode)
{
switch (data_mode) {

@@ -198,17 +198,16 @@ static void dm_bufio_free_buffer_data(struct dm_bufio_client *c, void *data, cha
vfree(data);
break;
default:
- printk(KERN_CRIT "dm_bufio_free_buffer_data: bad data mode: %d", data_mode);
+ printk(KERN_CRIT "dm_bufio_free_buffer_data: bad data mode: %d",
+ data_mode);
BUG();

}
}

-
/*
* Allocate buffer and its data.
*/
-
static struct dm_buffer *alloc_buffer(struct dm_bufio_client *c, gfp_t gfp_mask)
{
struct dm_buffer *b;
@@ -227,7 +226,6 @@ static struct dm_buffer *alloc_buffer(struct dm_bufio_client *c, gfp_t gfp_mask)
/*
* Free buffer and its data.
*/
-
static void free_buffer(struct dm_buffer *b)
{
dm_bufio_free_buffer_data(b->c, b->data, b->data_mode);
@@ -238,7 +236,6 @@ static void free_buffer(struct dm_buffer *b)
/*
* Link buffer to the hash list and clean or dirty queue.
*/
-
static void link_buffer(struct dm_buffer *b, sector_t block, int dirty)
{
struct dm_bufio_client *c = b->c;
@@ -251,7 +248,6 @@ static void link_buffer(struct dm_buffer *b, sector_t block, int dirty)
/*
* Unlink buffer from the hash list and dirty or clean queue.
*/
-
static void unlink_buffer(struct dm_buffer *b)
{
BUG_ON(!b->c->n_buffers);
@@ -263,7 +259,6 @@ static void unlink_buffer(struct dm_buffer *b)
/*
* Place the buffer to the head of dirty or clean LRU queue.
*/
-
static void relink_lru(struct dm_buffer *b, int dirty)
{
struct dm_bufio_client *c = b->c;
@@ -276,7 +271,6 @@ static void relink_lru(struct dm_buffer *b, int dirty)
* It unplugs the underlying block device, so that coalesced I/Os in
* the request queue are dispatched to the device.
*/
-
static int do_io_schedule(void *word)
{
struct dm_buffer *b = container_of(word, struct dm_buffer, state);
@@ -297,7 +291,6 @@ static void write_dirty_buffer(struct dm_buffer *b);
* When this function finishes, there is no I/O running on the buffer
* and the buffer is not dirty.
*/
-
static void make_buffer_clean(struct dm_buffer *b)
{
BUG_ON(b->hold_count);
@@ -311,9 +304,8 @@ static void make_buffer_clean(struct dm_buffer *b)
/*
* Find some buffer that is not held by anybody, clean it, unlink it and
* return it.
- * If "wait" is zero, try less harder and don't block.
+ * If "wait" is zero, try less hard and don't block.
*/
-
static struct dm_buffer *get_unclaimed_buffer(struct dm_bufio_client *c, int wait)
{
struct dm_buffer *b;
@@ -354,7 +346,6 @@ static struct dm_buffer *get_unclaimed_buffer(struct dm_bufio_client *c, int wai
* This function is entered with c->lock held, drops it and regains it before
* exiting.
*/
-
static void wait_for_free_buffer(struct dm_bufio_client *c)
{
DECLARE_WAITQUEUE(wait, current);
@@ -377,7 +368,6 @@ static void wait_for_free_buffer(struct dm_bufio_client *c)
*
* May drop the lock and regain it.
*/
-
static struct dm_buffer *alloc_buffer_wait(struct dm_bufio_client *c)
{
struct dm_buffer *b;
@@ -413,7 +403,6 @@ retry:
/*
* Free a buffer and wake other threads waiting for free buffers.
*/
-
static void free_buffer_wake(struct dm_buffer *b)
{
struct dm_bufio_client *c = b->c;
@@ -433,7 +422,6 @@ static void free_buffer_wake(struct dm_buffer *b)
* If we are over threshold_buffers, start freeing buffers.
* If we're over "limit_buffers", blocks until we get under the limit.
*/
-
static void check_watermark(struct dm_bufio_client *c)
{
while (c->n_buffers > c->threshold_buffers) {
@@ -462,14 +450,15 @@ static void dm_bufio_dmio_complete(unsigned long error, void *context);
* it is not vmalloc()ated, try using the bio interface.
*
* If the buffer is big, if it is vmalloc()ated or if the underlying device
- * rejects the bio because it is too large, use dmio layer to do the I/O.
+ * rejects the bio because it is too large, use dm-io layer to do the I/O.
* dmio layer splits the I/O to multiple requests, solving the above
- * shorcomings.
+ * shortcomings.
*/
-
-static void dm_bufio_submit_io(struct dm_buffer *b, int rw, sector_t block, bio_end_io_t *end_io)
+static void dm_bufio_submit_io(struct dm_buffer *b, int rw, sector_t block,
+ bio_end_io_t *end_io)
{
- if (b->c->block_size <= DM_BUFIO_INLINE_VECS * PAGE_SIZE && b->data_mode != DATA_MODE_VMALLOC) {
+ if (b->c->block_size <= DM_BUFIO_INLINE_VECS * PAGE_SIZE &&
+ b->data_mode != DATA_MODE_VMALLOC) {
char *ptr;
int len;
bio_init(&b->bio);
@@ -486,7 +475,9 @@ static void dm_bufio_submit_io(struct dm_buffer *b, int rw, sector_t block, bio_
ptr = b->data;
len = b->c->block_size;
do {
- if (!bio_add_page(&b->bio, virt_to_page(ptr), len < PAGE_SIZE ? len : PAGE_SIZE, virt_to_phys(ptr) & (PAGE_SIZE - 1))) {
+ if (!bio_add_page(&b->bio, virt_to_page(ptr),
+ len < PAGE_SIZE ? len : PAGE_SIZE,
+ virt_to_phys(ptr) & (PAGE_SIZE - 1))) {
BUG_ON(b->c->block_size <= PAGE_SIZE);
goto use_dmio;
}
@@ -526,7 +517,6 @@ use_dmio : {
* dm-io completion routine. It just calls b->bio.bi_end_io, pretending
* that the request was handled directly with bio interface.
*/
-
static void dm_bufio_dmio_complete(unsigned long error, void *context)
{
struct dm_buffer *b = context;
@@ -537,7 +527,6 @@ static void dm_bufio_dmio_complete(unsigned long error, void *context)
}

/* Find a buffer in the hash. */
-
static struct dm_buffer *dm_bufio_find(struct dm_bufio_client *c, sector_t block)
{
struct dm_buffer *b;
@@ -559,8 +548,8 @@ static void read_endio(struct bio *bio, int error);
* doesn't read the buffer from the disk (assuming that the caller overwrites
* all the data and uses dm_bufio_mark_buffer_dirty to write new data back).
*/
-
-static void *dm_bufio_new_read(struct dm_bufio_client *c, sector_t block, struct dm_buffer **bp, int read)
+static void *dm_bufio_new_read(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp, int read)
{
struct dm_buffer *b, *new_b = NULL;

@@ -572,11 +561,13 @@ retry_search:
if (new_b)
free_buffer_wake(new_b);
b->hold_count++;
- relink_lru(b, test_bit(B_DIRTY, &b->state) || test_bit(B_WRITING, &b->state));
+ relink_lru(b, test_bit(B_DIRTY, &b->state) ||
+ test_bit(B_WRITING, &b->state));
unlock_wait_ret:
mutex_unlock(&c->lock);
wait_ret:
- wait_on_bit(&b->state, B_READING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ wait_on_bit(&b->state, B_READING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
if (b->read_error) {
int error = b->read_error;
dm_bufio_release(b);
@@ -613,16 +604,16 @@ wait_ret:
}

/* Read the buffer and hold reference on it */
-
-void *dm_bufio_read(struct dm_bufio_client *c, sector_t block, struct dm_buffer **bp)
+void *dm_bufio_read(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp)
{
return dm_bufio_new_read(c, block, bp, 1);
}
EXPORT_SYMBOL(dm_bufio_read);

/* Get the buffer with possibly invalid data and hold reference on it */
-
-void *dm_bufio_new(struct dm_bufio_client *c, sector_t block, struct dm_buffer **bp)
+void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp)
{
return dm_bufio_new_read(c, block, bp, 0);
}
@@ -632,7 +623,6 @@ EXPORT_SYMBOL(dm_bufio_new);
* The endio routine for reading: set the error, clear the bit and wake up
* anyone waiting on the buffer.
*/
-
static void read_endio(struct bio *bio, int error)
{
struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
@@ -647,7 +637,6 @@ static void read_endio(struct bio *bio, int error)
/*
* Release the reference held on the buffer.
*/
-
void dm_bufio_release(struct dm_buffer *b)
{
struct dm_bufio_client *c = b->c;
@@ -677,7 +666,6 @@ EXPORT_SYMBOL(dm_bufio_release);
* Mark that the data in the buffer were modified and the buffer needs to
* be written back.
*/
-
void dm_bufio_mark_buffer_dirty(struct dm_buffer *b)
{
struct dm_bufio_client *c = b->c;
@@ -701,13 +689,13 @@ static void write_endio(struct bio *bio, int error);
* Finally, submit our write and don't wait on it. We set B_WRITING indicating
* that there is a write in progress.
*/
-
static void write_dirty_buffer(struct dm_buffer *b)
{
if (!test_bit(B_DIRTY, &b->state))
return;
clear_bit(B_DIRTY, &b->state);
- wait_on_bit_lock(&b->state, B_WRITING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ wait_on_bit_lock(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
dm_bufio_submit_io(b, WRITE, b->block, write_endio);
}

@@ -715,7 +703,6 @@ static void write_dirty_buffer(struct dm_buffer *b)
* The endio routine for write.
* Set the error, clear B_WRITING bit and wake anyone who was waiting on it.
*/
-
static void write_endio(struct bio *bio, int error)
{
struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
@@ -734,7 +721,6 @@ static void write_endio(struct bio *bio, int error)
/*
* Start writing all the dirty buffers. Don't wait for results.
*/
-
void dm_bufio_write_dirty_buffers_async(struct dm_bufio_client *c)
{
struct dm_buffer *b;
@@ -756,7 +742,6 @@ EXPORT_SYMBOL(dm_bufio_write_dirty_buffers_async);
*
* Finally, we flush hardware disk cache.
*/
-
int dm_bufio_write_dirty_buffers(struct dm_bufio_client *c)
{
int a, f;
@@ -777,11 +762,13 @@ again:
dropped_lock = 1;
b->hold_count++;
mutex_unlock(&c->lock);
- wait_on_bit(&b->state, B_WRITING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
mutex_lock(&c->lock);
b->hold_count--;
} else
- wait_on_bit(&b->state, B_WRITING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
}
if (!test_bit(B_DIRTY, &b->state) && !test_bit(B_WRITING, &b->state))
relink_lru(b, 0);
@@ -794,7 +781,7 @@ again:
* relinked to the clean list, so we won't loop scanning the
* same buffer again and again.
*
- * This may livelock if there is other thread simultaneously
+ * This may livelock if there is another thread simultaneously
* dirtying buffers, so we count the number of buffers walked
* and if it exceeds the total number of buffers, it means that
* someone is doing some writes simultaneously with us --- in
@@ -817,7 +804,6 @@ EXPORT_SYMBOL(dm_bufio_write_dirty_buffers);
/*
* Use dm-io to send and empty barrier flush the device.
*/
-
int dm_bufio_issue_flush(struct dm_bufio_client *c)
{
struct dm_io_request io_req = {
@@ -849,7 +835,6 @@ EXPORT_SYMBOL(dm_bufio_issue_flush);
* location but not relink it, because that other user needs to have the buffer
* at the same place.
*/
-
void dm_bufio_release_move(struct dm_buffer *b, sector_t new_block)
{
struct dm_bufio_client *c = b->c;
@@ -873,14 +858,17 @@ retry:
BUG_ON(test_bit(B_READING, &b->state));
write_dirty_buffer(b);
if (b->hold_count == 1) {
- wait_on_bit(&b->state, B_WRITING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
set_bit(B_DIRTY, &b->state);
unlink_buffer(b);
link_buffer(b, new_block, 1);
} else {
- wait_on_bit_lock(&b->state, B_WRITING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ wait_on_bit_lock(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
dm_bufio_submit_io(b, WRITE, new_block, write_endio);
- wait_on_bit(&b->state, B_WRITING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
}
mutex_unlock(&c->lock);
dm_bufio_release(b);
@@ -889,15 +877,14 @@ EXPORT_SYMBOL(dm_bufio_release_move);

/*
* Free all the buffers (and possibly write them if they were dirty)
- * It is required that the calling theread doesn't have any reference on
+ * It is required that the calling thread doesn't have any reference on
* any buffer.
*/
-
void dm_bufio_drop_buffers(struct dm_bufio_client *c)
{
struct dm_buffer *b;

- /* an optimization ... so that the buffers are not writte one-by-one */
+ /* an optimization ... so that the buffers are not written one-by-one */
dm_bufio_write_dirty_buffers_async(c);

mutex_lock(&c->lock);
@@ -910,8 +897,9 @@ void dm_bufio_drop_buffers(struct dm_bufio_client *c)
EXPORT_SYMBOL(dm_bufio_drop_buffers);

/* Create the buffering interface */
-
-struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsigned blocksize, unsigned flags, __u64 cache_threshold, __u64 cache_limit)
+struct dm_bufio_client *
+dm_bufio_client_create(struct block_device *bdev, unsigned blocksize,
+ unsigned flags, __u64 cache_threshold, __u64 cache_limit)
{
int r;
struct dm_bufio_client *c;
@@ -928,7 +916,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
c->bdev = bdev;
c->block_size = blocksize;
c->sectors_per_block_bits = ffs(blocksize) - 1 - SECTOR_SHIFT;
- c->pages_per_block_bits = ffs(blocksize) - 1 >= PAGE_SHIFT ? ffs(blocksize) - 1 - PAGE_SHIFT : 0;
+ c->pages_per_block_bits = (ffs(blocksize) - 1 >= PAGE_SHIFT) ?
+ (ffs(blocksize) - 1 - PAGE_SHIFT) : 0;
INIT_LIST_HEAD(&c->lru);
INIT_LIST_HEAD(&c->dirty_lru);
for (i = 0; i < DM_BUFIO_HASH_SIZE; i++)
@@ -938,7 +927,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign

if (!cache_limit)
cache_limit = DM_BUFIO_LIMIT_MEMORY;
- c->limit_buffers = cache_limit >> (c->sectors_per_block_bits + SECTOR_SHIFT);
+ c->limit_buffers = cache_limit >>
+ (c->sectors_per_block_bits + SECTOR_SHIFT);
if (!c->limit_buffers)
c->limit_buffers = 1;

@@ -946,12 +936,11 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
cache_threshold = DM_BUFIO_THRESHOLD_MEMORY;
if (cache_threshold > cache_limit)
cache_threshold = cache_limit;
- c->threshold_buffers = cache_threshold >> (c->sectors_per_block_bits + SECTOR_SHIFT);
+ c->threshold_buffers = cache_threshold >>
+ (c->sectors_per_block_bits + SECTOR_SHIFT);
if (!c->threshold_buffers)
c->threshold_buffers = 1;

- /*printk("%d %d\n", c->limit_buffers, c->threshold_buffers);*/
-
init_waitqueue_head(&c->free_buffer_wait);
c->async_write_error = 0;

@@ -983,7 +972,6 @@ EXPORT_SYMBOL(dm_bufio_client_create);
* Free the buffering interface.
* It is required that there are no references on any buffers.
*/
-
void dm_bufio_client_destroy(struct dm_bufio_client *c)
{
unsigned i;
diff --git a/drivers/md/dm-bufio.h b/drivers/md/dm-bufio.h
index 7abc035..3261ea2 100644
--- a/drivers/md/dm-bufio.h
+++ b/drivers/md/dm-bufio.h
@@ -12,8 +12,10 @@
struct dm_bufio_client;
struct dm_buffer;

-void *dm_bufio_read(struct dm_bufio_client *c, sector_t block, struct dm_buffer **bp);
-void *dm_bufio_new(struct dm_bufio_client *c, sector_t block, struct dm_buffer **bp);
+void *dm_bufio_read(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp);
+void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp);
void dm_bufio_release(struct dm_buffer *b);

void dm_bufio_mark_buffer_dirty(struct dm_buffer *b);
@@ -23,7 +25,10 @@ int dm_bufio_issue_flush(struct dm_bufio_client *c);

void dm_bufio_release_move(struct dm_buffer *b, sector_t new_block);

-struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsigned blocksize, unsigned flags, __u64 cache_threshold, __u64 cache_limit);
+struct dm_bufio_client *
+dm_bufio_client_create(struct block_device *bdev, unsigned blocksize,
+ unsigned flags, __u64 cache_threshold,
+ __u64 cache_limit);
void dm_bufio_client_destroy(struct dm_bufio_client *c);
void dm_bufio_drop_buffers(struct dm_bufio_client *c);

diff --git a/drivers/md/dm-multisnap-alloc.c b/drivers/md/dm-multisnap-alloc.c
index 482ed54..02f89be 100644
--- a/drivers/md/dm-multisnap-alloc.c
+++ b/drivers/md/dm-multisnap-alloc.c
@@ -16,7 +16,6 @@
/*
* Initialize the root bitmap, write it at the position "writing block".
*/
-
void dm_multisnap_create_bitmaps(struct dm_exception_store *s, chunk_t *writing_block)
{
struct dm_buffer *bp;
@@ -27,18 +26,23 @@ void dm_multisnap_create_bitmaps(struct dm_exception_store *s, chunk_t *writing_
(*writing_block)++;

if (*writing_block >= s->dev_size) {
- DM_MULTISNAP_SET_ERROR(s->dm, -ENOSPC, ("dm_multisnap_create_bitmaps: device is too small"));
+ DM_MULTISNAP_SET_ERROR(s->dm, -ENOSPC,
+ ("dm_multisnap_create_bitmaps: device is too small"));
return;
}

if (*writing_block >= s->chunk_size << BITS_PER_BYTE_SHIFT) {
- DM_MULTISNAP_SET_ERROR(s->dm, -ENOSPC, ("dm_multisnap_create_bitmaps: invalid block to write: %llx", (unsigned long long)*writing_block));
+ DM_MULTISNAP_SET_ERROR(s->dm, -ENOSPC,
+ ("dm_multisnap_create_bitmaps: invalid block to write: %llx",
+ (unsigned long long)*writing_block));
return;
}

bmp = dm_bufio_new(s->bufio, *writing_block, &bp);
if (IS_ERR(bmp)) {
- DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(bmp), ("dm_multisnap_create_bitmaps: can't create direct bitmap block at %llx", (unsigned long long)*writing_block));
+ DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(bmp),
+ ("dm_multisnap_create_bitmaps: can't create direct bitmap block at %llx",
+ (unsigned long long)*writing_block));
return;
}
cond_resched();
@@ -64,10 +68,9 @@ static void dm_multisnap_add_bitmap(struct dm_exception_store *s);
/*
* Extend bitmaps to cover "new_size" area.
*
- * While we extend bitmaps, we increase s->dev_size, so that the newly mapped
+ * While we extend bitmaps we increase s->dev_size so that the newly mapped
* space can be used to hold further bitmaps.
*/
-
void dm_multisnap_extend_bitmaps(struct dm_exception_store *s, chunk_t new_size)
{
while (s->dev_size < new_size) {
@@ -103,7 +106,6 @@ void dm_multisnap_extend_bitmaps(struct dm_exception_store *s, chunk_t new_size)
* Add one bitmap after the last bitmap. A helper function for
* dm_multisnap_extend_bitmaps
*/
-
static void dm_multisnap_add_bitmap(struct dm_exception_store *s)
{
struct path_element path[MAX_BITMAP_DEPTH];
@@ -171,8 +173,8 @@ static void dm_multisnap_add_bitmap(struct dm_exception_store *s)
* Return the pointer to the data, store the held buffer to bl.
* Return the block in block and path in path.
*/
-
-void *dm_multisnap_map_bitmap(struct dm_exception_store *s, bitmap_t bitmap, struct dm_buffer **bp, chunk_t *block, struct path_element *path)
+void *dm_multisnap_map_bitmap(struct dm_exception_store *s, bitmap_t bitmap,
+ struct dm_buffer **bp, chunk_t *block, struct path_element *path)
{
__u64 *bmp;
unsigned idx;
@@ -184,14 +186,15 @@ void *dm_multisnap_map_bitmap(struct dm_exception_store *s, bitmap_t bitmap, str
bmp = dm_multisnap_read_block(s, blk, bp);
if (unlikely(!bmp)) {
/* error is already set in dm_multisnap_read_block */
- DMERR("dm_multisnap_map_bitmap: can't read bitmap at %llx (%llx), pointed to by %llx (%llx), depth %d/%d, index %llx",
- (unsigned long long)blk,
- (unsigned long long)dm_multisnap_remap_block(s, blk),
- (unsigned long long)parent,
- (unsigned long long)dm_multisnap_remap_block(s, parent),
- s->bitmap_depth - d,
- s->bitmap_depth,
- (unsigned long long)bitmap);
+ DMERR("dm_multisnap_map_bitmap: can't read bitmap at "
+ "%llx (%llx), pointed to by %llx (%llx), depth %d/%d, index %llx",
+ (unsigned long long)blk,
+ (unsigned long long)dm_multisnap_remap_block(s, blk),
+ (unsigned long long)parent,
+ (unsigned long long)dm_multisnap_remap_block(s, parent),
+ s->bitmap_depth - d,
+ s->bitmap_depth,
+ (unsigned long long)bitmap);
return NULL;
}
if (!d) {
@@ -200,7 +203,8 @@ void *dm_multisnap_map_bitmap(struct dm_exception_store *s, bitmap_t bitmap, str
return bmp;
}

- idx = (bitmap >> ((d - 1) * (s->chunk_shift - BYTES_PER_POINTER_SHIFT))) & ((s->chunk_size - 1) >> BYTES_PER_POINTER_SHIFT);
+ idx = (bitmap >> ((d - 1) * (s->chunk_shift - BYTES_PER_POINTER_SHIFT))) &
+ ((s->chunk_size - 1) >> BYTES_PER_POINTER_SHIFT);

if (unlikely(path != NULL)) {
path[s->bitmap_depth - d].block = blk;
@@ -221,7 +225,6 @@ void *dm_multisnap_map_bitmap(struct dm_exception_store *s, bitmap_t bitmap, str
* Find a free bit from "start" to "end" (in bits).
* If wide_search is nonzero, search for the whole free byte first.
*/
-
static int find_bit(const void *bmp, unsigned start, unsigned end, int wide_search)
{
const void *p;
@@ -258,7 +261,6 @@ ret_bit:
* to find the valid number of bits. Note that bits past s->dev_size are
* undefined, there can be anything, so we must not scan past this limit.
*/
-
static unsigned bitmap_limit(struct dm_exception_store *s, bitmap_t bmp)
{
if (bmp == (bitmap_t)(s->dev_size >> (s->chunk_shift + BITS_PER_BYTE_SHIFT)))
@@ -287,8 +289,8 @@ static unsigned bitmap_limit(struct dm_exception_store *s, bitmap_t bmp)
* This is similar to what ext[23] does, so I suppose it is tuned well enough
* that it won't fragment too much.
*/
-
-int dm_multisnap_alloc_blocks(struct dm_exception_store *s, chunk_t *results, unsigned n_blocks, int flags)
+int dm_multisnap_alloc_blocks(struct dm_exception_store *s, chunk_t *results,
+ unsigned n_blocks, int flags)
{
void *bmp;
struct dm_buffer *bp;
@@ -427,7 +429,8 @@ bp_release_return:
* block was created since last commit.
*/

-void *dm_multisnap_alloc_duplicate_block(struct dm_exception_store *s, chunk_t block, struct dm_buffer **bp, void *ptr)
+void *dm_multisnap_alloc_duplicate_block(struct dm_exception_store *s, chunk_t block,
+ struct dm_buffer **bp, void *ptr)
{
int r;
chunk_t new_chunk;
@@ -446,15 +449,16 @@ void *dm_multisnap_alloc_duplicate_block(struct dm_exception_store *s, chunk_t b
if (!data)
return NULL;

- return dm_multisnap_duplicate_block(s, block, new_chunk, CB_BITMAP_IDX_NONE, bp, NULL);
+ return dm_multisnap_duplicate_block(s, block, new_chunk,
+ CB_BITMAP_IDX_NONE, bp, NULL);
}

/*
* Allocate a new block and return its data. Return the block number in *result
* and buffer pointer in *bp.
*/
-
-void *dm_multisnap_alloc_make_block(struct dm_exception_store *s, chunk_t *result, struct dm_buffer **bp)
+void *dm_multisnap_alloc_make_block(struct dm_exception_store *s, chunk_t *result,
+ struct dm_buffer **bp)
{
int r = dm_multisnap_alloc_blocks(s, result, 1, 0);
if (unlikely(r < 0))
@@ -464,16 +468,16 @@ void *dm_multisnap_alloc_make_block(struct dm_exception_store *s, chunk_t *resul
}

/*
- * Free the block immediatelly. You must be careful with this function because
+ * Free the block immediately. You must be careful with this function because
* it doesn't follow log-structured protocol.
*
* It may be used only if
* - the blocks to free were allocated since last transactions.
- * - or from freelist management, that makes the blocks is already recorded in
+ * - or from freelist management, which means the blocks were already recorded in
* a freelist (thus it would be freed again in case of machine crash).
*/
-
-void dm_multisnap_free_blocks_immediate(struct dm_exception_store *s, chunk_t block, unsigned n_blocks)
+void dm_multisnap_free_blocks_immediate(struct dm_exception_store *s, chunk_t block,
+ unsigned n_blocks)
{
void *bmp;
struct dm_buffer *bp;
@@ -482,7 +486,9 @@ void dm_multisnap_free_blocks_immediate(struct dm_exception_store *s, chunk_t bl
return;

if (unlikely(block + n_blocks > s->dev_size)) {
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_free_block_immediate: freeing invalid blocks %llx, %x", (unsigned long long)block, n_blocks));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_free_block_immediate: freeing invalid blocks %llx, %x",
+ (unsigned long long)block, n_blocks));
return;
}

@@ -515,7 +521,6 @@ void dm_multisnap_free_blocks_immediate(struct dm_exception_store *s, chunk_t bl
* Flush tmp_remaps for bitmaps. Write the path from modified bitmaps to the
* root.
*/
-
void dm_multisnap_bitmap_finalize_tmp_remap(struct dm_exception_store *s, struct tmp_remap *tmp_remap)
{
chunk_t block;
@@ -533,7 +538,8 @@ void dm_multisnap_bitmap_finalize_tmp_remap(struct dm_exception_store *s, struct
* doesn't have to allocate anything.
*/
if (s->n_preallocated_blocks < s->bitmap_depth) {
- if (unlikely(dm_multisnap_alloc_blocks(s, s->preallocated_blocks + s->n_preallocated_blocks, s->bitmap_depth * 2 - s->n_preallocated_blocks, 0) < 0))
+ if (unlikely(dm_multisnap_alloc_blocks(s, s->preallocated_blocks + s->n_preallocated_blocks,
+ s->bitmap_depth * 2 - s->n_preallocated_blocks, 0) < 0))
return;
s->n_preallocated_blocks = s->bitmap_depth * 2;
}
@@ -579,5 +585,6 @@ void dm_multisnap_bitmap_finalize_tmp_remap(struct dm_exception_store *s, struct
s->bitmap_root = new_blockn;

skip_it:
- memmove(s->preallocated_blocks, s->preallocated_blocks + results_ptr, (s->n_preallocated_blocks -= results_ptr) * sizeof(chunk_t));
+ memmove(s->preallocated_blocks, s->preallocated_blocks + results_ptr,
+ (s->n_preallocated_blocks -= results_ptr) * sizeof(chunk_t));
}
diff --git a/drivers/md/dm-multisnap-blocks.c b/drivers/md/dm-multisnap-blocks.c
index 2b53cd7..8715ed9 100644
--- a/drivers/md/dm-multisnap-blocks.c
+++ b/drivers/md/dm-multisnap-blocks.c
@@ -11,13 +11,14 @@
/*
* Check that the block is valid.
*/
-
static int check_invalid(struct dm_exception_store *s, chunk_t block)
{
if (unlikely(block >= s->dev_size) ||
unlikely(block == SB_BLOCK) ||
unlikely(dm_multisnap_is_commit_block(s, block))) {
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("check_invalid: access to invalid part of the device: %llx, size %llx", (unsigned long long)block, (unsigned long long)s->dev_size));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("check_invalid: access to invalid part of the device: %llx, size %llx",
+ (unsigned long long)block, (unsigned long long)s->dev_size));
return 1;
}
return 0;
@@ -39,7 +40,6 @@ static struct tmp_remap *find_tmp_remap(struct dm_exception_store *s, chunk_t bl
/*
* Remap a block number according to tmp_remap table.
*/
-
chunk_t dm_multisnap_remap_block(struct dm_exception_store *s, chunk_t block)
{
struct tmp_remap *t;
@@ -55,8 +55,8 @@ chunk_t dm_multisnap_remap_block(struct dm_exception_store *s, chunk_t block)
*
* Do a possible block remapping according to tmp_remap table.
*/
-
-void *dm_multisnap_read_block(struct dm_exception_store *s, chunk_t block, struct dm_buffer **bp)
+void *dm_multisnap_read_block(struct dm_exception_store *s, chunk_t block,
+ struct dm_buffer **bp)
{
void *buf;
cond_resched();
@@ -71,7 +71,9 @@ void *dm_multisnap_read_block(struct dm_exception_store *s, chunk_t block, struc

buf = dm_bufio_read(s->bufio, block, bp);
if (unlikely(IS_ERR(buf))) {
- DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(buf), ("dm_multisnap_read_block: error read chunk %llx", (unsigned long long)block));
+ DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(buf),
+ ("dm_multisnap_read_block: error read chunk %llx",
+ (unsigned long long)block));
return NULL;
}
return buf;
@@ -90,7 +92,6 @@ struct uncommitted_record {
* This function is used for optimizations, if it returns 0
* it doesn't break correctness, it only degrades performance.
*/
-
int dm_multisnap_block_is_uncommitted(struct dm_exception_store *s, chunk_t block)
{
struct tmp_remap *t;
@@ -120,7 +121,6 @@ int dm_multisnap_block_is_uncommitted(struct dm_exception_store *s, chunk_t bloc
* We can't use non-failing allocation because it could deadlock (wait for some
* pages being written and that write could be directed through this driver).
*/
-
void dm_multisnap_block_set_uncommitted(struct dm_exception_store *s, chunk_t block)
{
struct uncommitted_record *ur;
@@ -131,7 +131,8 @@ void dm_multisnap_block_set_uncommitted(struct dm_exception_store *s, chunk_t bl
* __GFP_NOMEMALLOC makes it less aggressive if the allocator recurses
* into itself.
*/
- ur = kmalloc(sizeof(struct uncommitted_record), GFP_NOWAIT | __GFP_NOWARN | __GFP_NOMEMALLOC);
+ ur = kmalloc(sizeof(struct uncommitted_record),
+ GFP_NOWAIT | __GFP_NOWARN | __GFP_NOMEMALLOC);
if (!ur)
return;
ur->block = block;
@@ -142,14 +143,14 @@ void dm_multisnap_block_set_uncommitted(struct dm_exception_store *s, chunk_t bl
* Clear the register of uncommitted blocks. This is called on commit and
* on unload.
*/
-
void dm_multisnap_clear_uncommitted(struct dm_exception_store *s)
{
int i;
for (i = 0; i < UNCOMMITTED_BLOCK_HASH_SIZE; i++) {
struct hlist_head *h = &s->uncommitted_blocks[i];
while (!hlist_empty(h)) {
- struct uncommitted_record *ur = hlist_entry(h->first, struct uncommitted_record, hash);
+ struct uncommitted_record *ur =
+ hlist_entry(h->first, struct uncommitted_record, hash);
hlist_del(&ur->hash);
kfree(ur);
}
@@ -170,8 +171,9 @@ void dm_multisnap_clear_uncommitted(struct dm_exception_store *s)
* A block that needs to be freed is returned in to_free. If to_free is NULL,
* that block is freed immediatelly.
*/
-
-void *dm_multisnap_duplicate_block(struct dm_exception_store *s, chunk_t old_chunk, chunk_t new_chunk, bitmap_t bitmap_idx, struct dm_buffer **bp, chunk_t *to_free_ptr)
+void *dm_multisnap_duplicate_block(struct dm_exception_store *s, chunk_t old_chunk,
+ chunk_t new_chunk, bitmap_t bitmap_idx,
+ struct dm_buffer **bp, chunk_t *to_free_ptr)
{
chunk_t to_free_val;
void *buf;
@@ -188,14 +190,17 @@ void *dm_multisnap_duplicate_block(struct dm_exception_store *s, chunk_t old_chu
t = find_tmp_remap(s, old_chunk);
if (t) {
if (unlikely(t->bitmap_idx != bitmap_idx)) {
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_duplicate_block: bitmap_idx doesn't match, %X != %X", t->bitmap_idx, bitmap_idx));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_duplicate_block: bitmap_idx doesn't match, %X != %X",
+ t->bitmap_idx, bitmap_idx));
return NULL;
}
*to_free_ptr = t->new;
t->new = new_chunk;
} else {
if (unlikely(list_empty(&s->free_tmp_remaps))) {
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_duplicate_block: all remap blocks used"));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_duplicate_block: all remap blocks used"));
return NULL;
}
t = list_first_entry(&s->free_tmp_remaps, struct tmp_remap, list);
@@ -218,7 +223,9 @@ void *dm_multisnap_duplicate_block(struct dm_exception_store *s, chunk_t old_chu

buf = dm_bufio_read(s->bufio, new_chunk, bp);
if (IS_ERR(buf)) {
- DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(buf), ("dm_multisnap_duplicate_block: error reading chunk %llx", (unsigned long long)new_chunk));
+ DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(buf),
+ ("dm_multisnap_duplicate_block: error reading chunk %llx",
+ (unsigned long long)new_chunk));
return NULL;
}
return buf;
@@ -227,7 +234,6 @@ void *dm_multisnap_duplicate_block(struct dm_exception_store *s, chunk_t old_chu
/*
* Remove an entry from tmp_remap table.
*/
-
void dm_multisnap_free_tmp_remap(struct dm_exception_store *s, struct tmp_remap *t)
{
list_del(&t->list);
@@ -241,8 +247,8 @@ void dm_multisnap_free_tmp_remap(struct dm_exception_store *s, struct tmp_remap
* It is expected that the caller fills all the data in the block, calls
* dm_bufio_mark_buffer_dirty and releases the buffer.
*/
-
-void *dm_multisnap_make_block(struct dm_exception_store *s, chunk_t new_chunk, struct dm_buffer **bp)
+void *dm_multisnap_make_block(struct dm_exception_store *s, chunk_t new_chunk,
+ struct dm_buffer **bp)
{
void *buf;

@@ -253,7 +259,9 @@ void *dm_multisnap_make_block(struct dm_exception_store *s, chunk_t new_chunk, s

buf = dm_bufio_new(s->bufio, new_chunk, bp);
if (unlikely(IS_ERR(buf))) {
- DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(buf), ("dm_multisnap_make_block: error creating new block at chunk %llx", (unsigned long long)new_chunk));
+ DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(buf),
+ ("dm_multisnap_make_block: error creating new block at chunk %llx",
+ (unsigned long long)new_chunk));
return NULL;
}
return buf;
@@ -262,7 +270,6 @@ void *dm_multisnap_make_block(struct dm_exception_store *s, chunk_t new_chunk, s
/*
* Free the given block and a possible tmp_remap shadow of it.
*/
-
void dm_multisnap_free_block_and_duplicates(struct dm_exception_store *s, chunk_t block)
{
struct tmp_remap *t;
@@ -281,7 +288,6 @@ void dm_multisnap_free_block_and_duplicates(struct dm_exception_store *s, chunk_
/*
* Return true if the block is a commit block.
*/
-
int dm_multisnap_is_commit_block(struct dm_exception_store *s, chunk_t block)
{
if (unlikely(block < FIRST_CB_BLOCK))
@@ -299,14 +305,13 @@ int dm_multisnap_is_commit_block(struct dm_exception_store *s, chunk_t block)
/*
* These two functions are used to avoid cycling on a corrupted device.
*
- * If the data on the device are corrupted, we mark the device as errorneous,
+ * If the data on the device is corrupted, we mark the device as errorneous,
* but we don't want to lockup the whole system. These functions help to achieve
* this goal.
*
* cy->count is the number of processed blocks.
* cy->key is the recorded block at last power-of-two count.
*/
-
void dm_multisnap_init_stop_cycles(struct stop_cycles *cy)
{
cy->key = 0;
@@ -316,7 +321,9 @@ void dm_multisnap_init_stop_cycles(struct stop_cycles *cy)
int dm_multisnap_stop_cycles(struct dm_exception_store *s, struct stop_cycles *cy, chunk_t key)
{
if (unlikely(cy->key == key) && unlikely(cy->count != 0)) {
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_stop_cycles: cycle detected at chunk %llx", (unsigned long long)key));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_stop_cycles: cycle detected at chunk %llx",
+ (unsigned long long)key));
return -1;
}
cy->count++;
diff --git a/drivers/md/dm-multisnap-btree.c b/drivers/md/dm-multisnap-btree.c
index 722d842..a7e3b60 100644
--- a/drivers/md/dm-multisnap-btree.c
+++ b/drivers/md/dm-multisnap-btree.c
@@ -12,8 +12,9 @@
* Read one btree node and do basic consistency checks.
* Any btree access should be done with this function.
*/
-
-static struct dm_multisnap_bt_node *dm_multisnap_read_btnode(struct dm_exception_store *s, int depth, chunk_t block, unsigned want_entries, struct dm_buffer **bp)
+static struct dm_multisnap_bt_node *
+dm_multisnap_read_btnode(struct dm_exception_store *s, int depth,
+ chunk_t block, unsigned want_entries, struct dm_buffer **bp)
{
struct dm_multisnap_bt_node *node;

@@ -25,17 +26,21 @@ static struct dm_multisnap_bt_node *dm_multisnap_read_btnode(struct dm_exception

if (unlikely(node->signature != BT_SIGNATURE)) {
dm_bufio_release(*bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_read_btnode: bad signature on btree node %llx", (unsigned long long)block));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_read_btnode: bad signature on btree node %llx",
+ (unsigned long long)block));
return NULL;
}

if (unlikely((unsigned)(le32_to_cpu(node->n_entries) - 1) >= s->btree_entries) ||
(want_entries && unlikely(le32_to_cpu(node->n_entries) != want_entries))) {
dm_bufio_release(*bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_read_btnode: bad number of entries in btree node %llx: %x, wanted %x",
- (unsigned long long)block,
- le32_to_cpu(node->n_entries),
- want_entries));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_read_btnode: bad number of entries in btree node "
+ "%llx: %x, wanted %x",
+ (unsigned long long)block,
+ le32_to_cpu(node->n_entries),
+ want_entries));
return NULL;
}

@@ -49,7 +54,6 @@ static struct dm_multisnap_bt_node *dm_multisnap_read_btnode(struct dm_exception
* with bits 32-47 set, so that the store could be read on a system with
* 64-bit chunk_t.
*/
-
static void write_orig_chunk(struct dm_multisnap_bt_entry *be, chunk_t n)
{
write_48(be, orig_chunk, n);
@@ -61,10 +65,11 @@ static void write_orig_chunk(struct dm_multisnap_bt_entry *be, chunk_t n)
* Add an entry (key, new_chunk) at an appropriate index to the btree node.
* Move the existing entries
*/
-
-static void add_at_idx(struct dm_multisnap_bt_node *node, unsigned index, struct bt_key *key, chunk_t new_chunk)
+static void add_at_idx(struct dm_multisnap_bt_node *node, unsigned index,
+ struct bt_key *key, chunk_t new_chunk)
{
- memmove(&node->entries[index + 1], &node->entries[index], (le32_to_cpu(node->n_entries) - index) * sizeof(struct dm_multisnap_bt_entry));
+ memmove(&node->entries[index + 1], &node->entries[index],
+ (le32_to_cpu(node->n_entries) - index) * sizeof(struct dm_multisnap_bt_entry));
write_orig_chunk(&node->entries[index], key->chunk);
write_48(&node->entries[index], new_chunk, new_chunk);
node->entries[index].snap_from = cpu_to_mikulas_snapid(key->snap_from);
@@ -77,7 +82,6 @@ static void add_at_idx(struct dm_multisnap_bt_node *node, unsigned index, struct
* Create an initial btree.
* (*writing_block) is updated to point after the btree.
*/
-
void dm_multisnap_create_btree(struct dm_exception_store *s, chunk_t *writing_block)
{
struct dm_buffer *bp;
@@ -88,13 +92,16 @@ void dm_multisnap_create_btree(struct dm_exception_store *s, chunk_t *writing_bl
(*writing_block)++;

if (*writing_block >= s->dev_size) {
- DM_MULTISNAP_SET_ERROR(s->dm, -ENOSPC, ("dm_multisnap_create_btree: device is too small"));
+ DM_MULTISNAP_SET_ERROR(s->dm, -ENOSPC,
+ ("dm_multisnap_create_btree: device is too small"));
return;
}

node = dm_bufio_new(s->bufio, *writing_block, &bp);
if (IS_ERR(node)) {
- DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(node), ("dm_multisnap_create_btree: 't create direct bitmap block at %llx", (unsigned long long)*writing_block));
+ DM_MULTISNAP_SET_ERROR(s->dm, PTR_ERR(node),
+ ("dm_multisnap_create_btree: 't create direct bitmap block at %llx",
+ (unsigned long long)*writing_block));
return;
}
memset(node, 0, s->chunk_size);
@@ -123,7 +130,6 @@ void dm_multisnap_create_btree(struct dm_exception_store *s, chunk_t *writing_bl
* 0: the entry matches the key (both entry and key have ranges, a match
* is returned when the ranges overlap)
*/
-
static int compare_key(struct dm_multisnap_bt_entry *e, struct bt_key *key)
{
chunk_t orig_chunk = read_48(e, orig_chunk);
@@ -146,8 +152,8 @@ static int compare_key(struct dm_multisnap_bt_entry *e, struct bt_key *key)
* *result - if found, then the first entry in the requested range
* - if not found, then the first entry after the requested range
*/
-
-static int binary_search(struct dm_multisnap_bt_node *node, struct bt_key *key, unsigned *result)
+static int binary_search(struct dm_multisnap_bt_node *node, struct bt_key *key,
+ unsigned *result)
{
int c;
int first = 0;
@@ -182,8 +188,9 @@ static int binary_search(struct dm_multisnap_bt_node *node, struct bt_key *key,
* this node is returned (the buffer must be released with
* dm_bufio_release). Also, path with s->bt_depth entries is returned.
*/
-
-static int walk_btree(struct dm_exception_store *s, struct bt_key *key, struct dm_multisnap_bt_node **nodep, struct dm_buffer **bp, struct path_element path[MAX_BT_DEPTH])
+static int walk_btree(struct dm_exception_store *s, struct bt_key *key,
+ struct dm_multisnap_bt_node **nodep, struct dm_buffer **bp,
+ struct path_element path[MAX_BT_DEPTH])
{
#define node (*nodep)
int r;
@@ -212,16 +219,19 @@ static int walk_btree(struct dm_exception_store *s, struct bt_key *key, struct d
if (unlikely(last_chunk != want_last_chunk) ||
unlikely(last_snapid != want_last_snapid)) {
dm_bufio_release(*bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("walk_btree: invalid last entry in node %llx/%llx: last_chunk %llx, want_last_chunk %llx, last_snapid: %llx, want_last_snapid: %llx, searching for %llx, %llx-%llx",
- (unsigned long long)block,
- (unsigned long long)dm_multisnap_remap_block(s, block),
- (unsigned long long)last_chunk,
- (unsigned long long)want_last_chunk,
- (unsigned long long)last_snapid,
- (unsigned long long)want_last_snapid,
- (unsigned long long)key->chunk,
- (unsigned long long)key->snap_from,
- (unsigned long long)key->snap_to));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("walk_btree: invalid last entry in node %llx/%llx: "
+ "last_chunk %llx, want_last_chunk %llx, last_snapid: %llx, "
+ "want_last_snapid: %llx, searching for %llx, %llx-%llx",
+ (unsigned long long)block,
+ (unsigned long long)dm_multisnap_remap_block(s, block),
+ (unsigned long long)last_chunk,
+ (unsigned long long)want_last_chunk,
+ (unsigned long long)last_snapid,
+ (unsigned long long)want_last_snapid,
+ (unsigned long long)key->chunk,
+ (unsigned long long)key->snap_from,
+ (unsigned long long)key->snap_to));
return -1;
}

@@ -248,8 +258,8 @@ static int walk_btree(struct dm_exception_store *s, struct bt_key *key, struct d
* In case the node is found, key contains updated key and result contains
* the resulting chunk.
*/
-
-int dm_multisnap_find_in_btree(struct dm_exception_store *s, struct bt_key *key, chunk_t *result)
+int dm_multisnap_find_in_btree(struct dm_exception_store *s, struct bt_key *key,
+ chunk_t *result)
{
struct dm_multisnap_bt_node *node;
struct path_element path[MAX_BT_DEPTH];
@@ -278,8 +288,10 @@ int dm_multisnap_find_in_btree(struct dm_exception_store *s, struct bt_key *key,
* When the whole tree is scanned, return 0.
* On error, return -1.
*/
-
-int dm_multisnap_list_btree(struct dm_exception_store *s, struct bt_key *key, int (*call)(struct dm_exception_store *, struct dm_multisnap_bt_node *, struct dm_multisnap_bt_entry *, void *), void *cookie)
+int dm_multisnap_list_btree(struct dm_exception_store *s, struct bt_key *key,
+ int (*call)(struct dm_exception_store *, struct dm_multisnap_bt_node *,
+ struct dm_multisnap_bt_entry *, void *),
+ void *cookie)
{
struct dm_multisnap_bt_node *node;
struct path_element path[MAX_BT_DEPTH];
@@ -305,7 +317,8 @@ list_next_node:

for (depth = s->bt_depth - 2; depth >= 0; depth--) {
int idx;
- node = dm_multisnap_read_btnode(s, depth, path[depth].block, path[depth].n_entries, &bp);
+ node = dm_multisnap_read_btnode(s, depth, path[depth].block,
+ path[depth].n_entries, &bp);
if (!node)
return -1;
idx = path[depth].idx + 1;
@@ -313,9 +326,10 @@ list_next_node:
r = compare_key(&node->entries[idx], key);
if (unlikely(r <= 0)) {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_list_btree: non-monotonic btree: node %llx, index %x",
- (unsigned long long)path[depth].block,
- idx));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_list_btree: non-monotonic btree: node "
+ "%llx, index %x",
+ (unsigned long long)path[depth].block, idx));
return 0;
}
path[depth].idx = idx;
@@ -359,10 +373,12 @@ void dm_multisnap_add_to_btree(struct dm_exception_store *s, struct bt_key *key,
if (unlikely(r)) {
if (r > 0) {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_add_to_btree: adding key that already exists: %llx, %llx-%llx",
- (unsigned long long)key->chunk,
- (unsigned long long)key->snap_from,
- (unsigned long long)key->snap_to));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_add_to_btree: adding key that already exists: "
+ "%llx, %llx-%llx",
+ (unsigned long long)key->chunk,
+ (unsigned long long)key->snap_from,
+ (unsigned long long)key->snap_to));
}
return;
}
@@ -392,9 +408,11 @@ go_up:
cond_resched();
memcpy(node, s->tmp_chunk, sizeof(struct dm_multisnap_bt_node));
cond_resched();
- memcpy((char *)node + sizeof(struct dm_multisnap_bt_node), (char *)s->tmp_chunk + split_offset, split_size - split_offset);
+ memcpy((char *)node + sizeof(struct dm_multisnap_bt_node),
+ (char *)s->tmp_chunk + split_offset, split_size - split_offset);
cond_resched();
- memset((char *)node + sizeof(struct dm_multisnap_bt_node) + split_size - split_offset, 0, s->chunk_size - (sizeof(struct dm_multisnap_bt_node) + split_size - split_offset));
+ memset((char *)node + sizeof(struct dm_multisnap_bt_node) + split_size - split_offset, 0,
+ s->chunk_size - (sizeof(struct dm_multisnap_bt_node) + split_size - split_offset));
cond_resched();
node->n_entries = cpu_to_le32(split_entries - split_index);

@@ -423,14 +441,16 @@ go_up:
dm_bufio_release(bp);

if (depth--) {
- node = dm_multisnap_read_btnode(s, depth, path[depth].block, path[depth].n_entries, &bp);
+ node = dm_multisnap_read_btnode(s, depth, path[depth].block,
+ path[depth].n_entries, &bp);
if (unlikely(!node))
return;
goto go_up;
}

if (s->bt_depth >= MAX_BT_DEPTH) {
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_add_to_btree: max b+-tree depth reached"));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_add_to_btree: max b+-tree depth reached"));
return;
}

@@ -459,8 +479,10 @@ go_up:
* Change the last entry from old_chunk/old_snapid to new_chunk/new_snapid.
* Start at a given depth and go upward to the root.
*/
-
-static void dm_multisnap_fixup_backlimits(struct dm_exception_store *s, struct path_element path[MAX_BT_DEPTH], int depth, chunk_t old_chunk, mikulas_snapid_t old_snapid, chunk_t new_chunk, mikulas_snapid_t new_snapid)
+static void dm_multisnap_fixup_backlimits(struct dm_exception_store *s,
+ struct path_element path[MAX_BT_DEPTH], int depth,
+ chunk_t old_chunk, mikulas_snapid_t old_snapid,
+ chunk_t new_chunk, mikulas_snapid_t new_snapid)
{
int idx;
struct dm_multisnap_bt_node *node;
@@ -470,7 +492,8 @@ static void dm_multisnap_fixup_backlimits(struct dm_exception_store *s, struct p
return;

for (depth--; depth >= 0; depth--) {
- node = dm_multisnap_read_btnode(s, depth, path[depth].block, path[depth].n_entries, &bp);
+ node = dm_multisnap_read_btnode(s, depth, path[depth].block,
+ path[depth].n_entries, &bp);
if (unlikely(!node))
return;

@@ -484,14 +507,17 @@ static void dm_multisnap_fixup_backlimits(struct dm_exception_store *s, struct p
unlikely(mikulas_snapid_to_cpu(node->entries[idx].snap_from) != old_snapid) ||
unlikely(mikulas_snapid_to_cpu(node->entries[idx].snap_to) != old_snapid)) {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_fixup_backlimits: btree limit does not match, block %llx, idx %x, orig_chunk %llx, snap_from %llx, snap_to %llx, want %llx, %llx",
- (unsigned long long)path[depth].block,
- idx,
- (unsigned long long)read_48(&node->entries[idx], orig_chunk),
- (unsigned long long)mikulas_snapid_to_cpu(node->entries[idx].snap_from),
- (unsigned long long)mikulas_snapid_to_cpu(node->entries[idx].snap_to),
- (unsigned long long)old_chunk,
- (unsigned long long)old_snapid));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_fixup_backlimits: btree limit does not match, block "
+ "%llx, idx %x, orig_chunk %llx, snap_from %llx, snap_to "
+ "%llx, want %llx, %llx",
+ (unsigned long long)path[depth].block,
+ idx,
+ (unsigned long long)read_48(&node->entries[idx], orig_chunk),
+ (unsigned long long)mikulas_snapid_to_cpu(node->entries[idx].snap_from),
+ (unsigned long long)mikulas_snapid_to_cpu(node->entries[idx].snap_to),
+ (unsigned long long)old_chunk,
+ (unsigned long long)old_snapid));
return;
}
write_48(&node->entries[idx], orig_chunk, new_chunk);
@@ -503,11 +529,12 @@ static void dm_multisnap_fixup_backlimits(struct dm_exception_store *s, struct p
if (path[depth].idx != path[depth].n_entries - 1)
return;
}
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_fixup_backlimits: the last entry modified, %llx/%llx -> %llx/%llx",
- (unsigned long long)old_chunk,
- (unsigned long long)old_snapid,
- (unsigned long long)new_chunk,
- (unsigned long long)new_snapid));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_fixup_backlimits: the last entry modified, %llx/%llx -> %llx/%llx",
+ (unsigned long long)old_chunk,
+ (unsigned long long)old_snapid,
+ (unsigned long long)new_chunk,
+ (unsigned long long)new_snapid));
}

/*
@@ -515,7 +542,6 @@ static void dm_multisnap_fixup_backlimits(struct dm_exception_store *s, struct p
* The key must have the same beginning or end as some existing entry (not both)
* The range of the key is excluded from the entry.
*/
-
void dm_multisnap_restrict_btree_entry(struct dm_exception_store *s, struct bt_key *key)
{
struct dm_multisnap_bt_node *node;
@@ -531,10 +557,11 @@ void dm_multisnap_restrict_btree_entry(struct dm_exception_store *s, struct bt_k

if (!r) {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_restrict_btree_entry: unknown key: %llx, %llx-%llx",
- (unsigned long long)key->chunk,
- (unsigned long long)key->snap_from,
- (unsigned long long)key->snap_to));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_restrict_btree_entry: unknown key: %llx, %llx-%llx",
+ (unsigned long long)key->chunk,
+ (unsigned long long)key->snap_from,
+ (unsigned long long)key->snap_to));
return;
}

@@ -553,12 +580,14 @@ void dm_multisnap_restrict_btree_entry(struct dm_exception_store *s, struct bt_k
entry->snap_to = cpu_to_mikulas_snapid(new_to);
} else {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_restrict_btree_entry: invali range to restruct: %llx, %llx-%llx %llx-%llx",
- (unsigned long long)key->chunk,
- (unsigned long long)from,
- (unsigned long long)to,
- (unsigned long long)key->snap_from,
- (unsigned long long)key->snap_to));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_restrict_btree_entry: invali range to restruct: "
+ "%llx, %llx-%llx %llx-%llx",
+ (unsigned long long)key->chunk,
+ (unsigned long long)from,
+ (unsigned long long)to,
+ (unsigned long long)key->snap_from,
+ (unsigned long long)key->snap_to));
return;
}

@@ -566,14 +595,14 @@ void dm_multisnap_restrict_btree_entry(struct dm_exception_store *s, struct bt_k
dm_bufio_release(bp);

if (unlikely(idx == path[s->bt_depth - 1].n_entries - 1))
- dm_multisnap_fixup_backlimits(s, path, s->bt_depth - 1, key->chunk, to, key->chunk, new_to);
+ dm_multisnap_fixup_backlimits(s, path, s->bt_depth - 1,
+ key->chunk, to, key->chunk, new_to);
}

/*
* Expand range of an existing btree entry.
* The key represents the whole new range (including the old and new part).
*/
-
void dm_multisnap_extend_btree_entry(struct dm_exception_store *s, struct bt_key *key)
{
struct dm_multisnap_bt_node *node;
@@ -589,14 +618,17 @@ void dm_multisnap_extend_btree_entry(struct dm_exception_store *s, struct bt_key

if (!r) {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_extend_btree_entry: unknown key: %llx, %llx-%llx",
- (unsigned long long)key->chunk,
- (unsigned long long)key->snap_from,
- (unsigned long long)key->snap_to));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_extend_btree_entry: unknown key: "
+ "%llx, %llx-%llx",
+ (unsigned long long)key->chunk,
+ (unsigned long long)key->snap_from,
+ (unsigned long long)key->snap_to));
return;
}

- node = dm_multisnap_alloc_duplicate_block(s, path[s->bt_depth - 1].block, &bp, node);
+ node = dm_multisnap_alloc_duplicate_block(s, path[s->bt_depth - 1].block,
+ &bp, node);
if (unlikely(!node))
return;

@@ -615,13 +647,13 @@ void dm_multisnap_extend_btree_entry(struct dm_exception_store *s, struct bt_key
dm_bufio_release(bp);

if (unlikely(idx == path[s->bt_depth - 1].n_entries - 1))
- dm_multisnap_fixup_backlimits(s, path, s->bt_depth - 1, key->chunk, to, key->chunk, new_to);
+ dm_multisnap_fixup_backlimits(s, path, s->bt_depth - 1,
+ key->chunk, to, key->chunk, new_to);
}

/*
* Delete an entry from the btree.
*/
-
void dm_multisnap_delete_from_btree(struct dm_exception_store *s, struct bt_key *key)
{
struct dm_multisnap_bt_node *node;
@@ -642,10 +674,11 @@ void dm_multisnap_delete_from_btree(struct dm_exception_store *s, struct bt_key

if (unlikely(!r)) {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_delete_from_btree: unknown key: %llx, %llx-%llx",
- (unsigned long long)key->chunk,
- (unsigned long long)key->snap_from,
- (unsigned long long)key->snap_to));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_delete_from_btree: unknown key: %llx, %llx-%llx",
+ (unsigned long long)key->chunk,
+ (unsigned long long)key->snap_from,
+ (unsigned long long)key->snap_to));
return;
}

@@ -657,24 +690,28 @@ void dm_multisnap_delete_from_btree(struct dm_exception_store *s, struct bt_key
to = mikulas_snapid_to_cpu(entry->snap_to);
if (unlikely(from != key->snap_from) || unlikely(to != key->snap_to)) {
dm_bufio_release(bp);
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_restrict_btree: invali range to restruct: %llx, %llx-%llx %llx-%llx",
- (unsigned long long)key->chunk,
- (unsigned long long)from,
- (unsigned long long)to,
- (unsigned long long)key->snap_from,
- (unsigned long long)key->snap_to));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_delete_from_btree: invalid range to restrict: "
+ "%llx, %llx-%llx %llx-%llx",
+ (unsigned long long)key->chunk,
+ (unsigned long long)from,
+ (unsigned long long)to,
+ (unsigned long long)key->snap_from,
+ (unsigned long long)key->snap_to));
return;
}

while (unlikely((n_entries = le32_to_cpu(node->n_entries)) == 1)) {
dm_bufio_release(bp);
if (unlikely(!depth)) {
- DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR, ("dm_multisnap_restrict_btree: b-tree is empty"));
+ DM_MULTISNAP_SET_ERROR(s->dm, -EFSERROR,
+ ("dm_multisnap_delete_from_btree: b-tree is empty"));
return;
}
dm_multisnap_free_block_and_duplicates(s, path[depth].block);
depth--;
- node = dm_multisnap_read_btnode(s, depth, path[depth].block, path[depth].n_entries, &bp);
+ node = dm_multisnap_read_btnode(s, depth, path[depth].block,
+ path[depth].n_entries, &bp);
if (!node)
return;
}
@@ -686,7 +723,8 @@ void dm_multisnap_delete_from_btree(struct dm_exception_store *s, struct bt_key
idx = path[depth].idx;

cond_resched();
- memmove(node->entries + idx, node->entries + idx + 1, (n_entries - idx - 1) * sizeof(struct dm_multisnap_bt_entry));
+ memmove(node->entries + idx, node->entries + idx + 1,
+ (n_entries - idx - 1) * sizeof(struct dm_multisnap_bt_entry));
cond_resched();
n_entries--;
memset(node->entries + n_entries, 0, sizeof(struct dm_multisnap_bt_entry));
@@ -701,7 +739,9 @@ void dm_multisnap_delete_from_btree(struct dm_exception_store *s, struct bt_key
dm_bufio_release(bp);

if (unlikely(idx == n_entries))
- dm_multisnap_fixup_backlimits(s, path, depth, key->chunk, key->snap_to, last_one_chunk, last_one_snap_to);
+ dm_multisnap_fixup_backlimits(s, path, depth, key->chunk,
+ key->snap_to, last_one_chunk,
+ last_one_snap_to);
}

/*
@@ -709,8 +749,8 @@ void dm_multisnap_delete_from_btree(struct dm_exception_store *s, struct bt_key
* Find the whole path for tmp_remap and write the path as new entries, from
* the root.
*/
-
-void dm_multisnap_bt_finalize_tmp_remap(struct dm_exception_store *s, struct tmp_remap *tmp_remap)
+void dm_multisnap_bt_finalize_tmp_remap(struct dm_exception_store *s,
+ struct tmp_remap *tmp_remap)
{
struct dm_buffer *bp;
struct dm_multisnap_bt_node *node;
@@ -723,7 +763,8 @@ void dm_multisnap_bt_finalize_tmp_remap(struct dm_exception_store *s, struct tmp
int i;

if (s->n_preallocated_blocks < s->bt_depth) {
- if (dm_multisnap_alloc_blocks(s, s->preallocated_blocks + s->n_preallocated_blocks, s->bt_depth - s->n_preallocated_blocks, 0) < 0)
+ if (dm_multisnap_alloc_blocks(s, s->preallocated_blocks + s->n_preallocated_blocks,
+ s->bt_depth - s->n_preallocated_blocks, 0) < 0)
return;
s->n_preallocated_blocks = s->bt_depth;
}
@@ -751,17 +792,16 @@ void dm_multisnap_bt_finalize_tmp_remap(struct dm_exception_store *s, struct tmp
goto found;

DMERR("block %llx/%llx was not found in btree when searching for %llx/%llx",
- (unsigned long long)tmp_remap->old,
- (unsigned long long)tmp_remap->new,
- (unsigned long long)key.chunk,
- (unsigned long long)key.snap_from);
+ (unsigned long long)tmp_remap->old,
+ (unsigned long long)tmp_remap->new,
+ (unsigned long long)key.chunk,
+ (unsigned long long)key.snap_from);
for (i = 0; i < s->bt_depth; i++)
DMERR("path[%d]: %llx/%x", i, (unsigned long long)path[i].block, path[i].idx);
dm_multisnap_set_error(s->dm, -EFSERROR);
return;

found:
-
dm_multisnap_free_block(s, tmp_remap->old, 0);

new_blockn = tmp_remap->new;
@@ -774,7 +814,8 @@ found:
remapped = 1;
dm_bufio_release_move(bp, s->preallocated_blocks[results_ptr]);
dm_multisnap_free_block_and_duplicates(s, path[i].block);
- node = dm_multisnap_read_btnode(s, i, s->preallocated_blocks[results_ptr], path[i].n_entries, &bp);
+ node = dm_multisnap_read_btnode(s, i, s->preallocated_blocks[results_ptr],
+ path[i].n_entries, &bp);
if (!node)
return;
dm_multisnap_block_set_uncommitted(s, s->preallocated_blocks[results_ptr]);
@@ -792,6 +833,6 @@ found:
s->bt_root = new_blockn;

skip_it:
- memmove(s->preallocated_blocks, s->preallocated_blocks + results_ptr, (s->n_preallocated_blocks -= results_ptr) * sizeof(chunk_t));
+ memmove(s->preallocated_blocks, s->preallocated_blocks + results_ptr,
+ (s->n_preallocated_blocks -= results_ptr) * sizeof(chunk_t));
}
-
diff --git a/drivers/md/dm-multisnap-commit.c b/drivers/md/dm-multisnap-commit.c
index f44f2e7..78b2583 100644
--- a/drivers/md/dm-multisnap-commit.c
+++ b/drivers/md/dm-multisnap-commit.c
@@ -11,7 +11,6 @@
/*
* Flush existing tmp_remaps.
*/
-
static void dm_multisnap_finalize_tmp_remaps(struct dm_exception_store *s)
{
struct tmp_remap *t;
@@ -26,21 +25,25 @@ static void dm_multisnap_finalize_tmp_remaps(struct dm_exception_store *s)
* if there are none, do bitmap remaps
*/
if (!list_empty(&s->used_bt_tmp_remaps)) {
- t = container_of(s->used_bt_tmp_remaps.next, struct tmp_remap, list);
+ t = container_of(s->used_bt_tmp_remaps.next,
+ struct tmp_remap, list);
dm_multisnap_bt_finalize_tmp_remap(s, t);
dm_multisnap_free_tmp_remap(s, t);
continue;
}
}

-/* else: 0 or 1 free remaps : finalize bitmaps */
+ /* else: 0 or 1 free remaps : finalize bitmaps */
if (!list_empty(&s->used_bitmap_tmp_remaps)) {
- t = container_of(s->used_bitmap_tmp_remaps.next, struct tmp_remap, list);
+ t = container_of(s->used_bitmap_tmp_remaps.next,
+ struct tmp_remap, list);
dm_multisnap_bitmap_finalize_tmp_remap(s, t);
 dm_multisnap_free_tmp_remap(s, t);
 
Old 03-02-2010, 01:56 PM
Mike Snitzer
 
Default mikulas' shared snapshot patches

On Mon, Mar 01 2010 at 7:23pm -0500,
Mike Snitzer <snitzer@redhat.com> wrote:

> Mikulas,
>
> This is just the full submit of your shared snapshot patches from:
> http://people.redhat.com/mpatocka/patches/kernel/new-snapshots/r15/
>
> I think the next phase of review should possibly be driven through the
> dm-devel mailing list. I'd at least like the option of exchanging
> mail on aspects of some of these patches.
>
> The first patch has one small cleanup in do_origin_write(): I
> eliminated the 'midcycle' goto.
>
> But the primary difference with this submission (when compared to your
> r15 patches) is I editted the patches for whitespace and typos.

Mikulas,

I've made these patches available for download here:
http://people.redhat.com/msnitzer/patches/multisnap/kernel/2.6.33/

I'd like to hand these DM patches, and the lvm2 patches, over to you so
we don't get out of sync.

The lvm2 patches were rebased and fixed to work with lvm2 2.02.62:
http://people.redhat.com/msnitzer/patches/multisnap/lvm2/LVM2-2.02.62/

I've not heard from you on either line of work that I did here. I
welcome your feedback.

Mike

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
