Linux Archive > Redhat > Device-mapper Development
09-21-2012, 03:47 PM
Mike Snitzer

dm: gracefully fail any request beyond the end of the device

The access beyond the end of device BUG_ON that was introduced to
dm_request_fn via commit 29e4013de7ad950280e4b2208 ("dm: implement
REQ_FLUSH/FUA support for request-based dm") is an overly drastic
response. Use dm_kill_unmapped_request() to fail the clone and original
request with -EIO.

map_request() will assign the valid target returned by
dm_table_find_target() to tio->ti. But in the case where the target
isn't valid, tio->ti is never assigned (because map_request() isn't
called); so add a check for tio->ti != NULL to dm_done().

Reported-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # v2.6.37+
---
drivers/md/dm.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)

v2: added a DMERR_LIMIT message to give context for the IO errors

Index: linux/drivers/md/dm.c
===================================================================
--- linux.orig/drivers/md/dm.c
+++ linux/drivers/md/dm.c
@@ -865,7 +865,10 @@ static void dm_done(struct request *clon
{
int r = error;
struct dm_rq_target_io *tio = clone->end_io_data;
- dm_request_endio_fn rq_end_io = tio->ti->type->rq_end_io;
+ dm_request_endio_fn rq_end_io = NULL;
+
+ if (tio->ti)
+ rq_end_io = tio->ti->type->rq_end_io;

if (mapped && rq_end_io)
r = rq_end_io(tio->ti, clone, error, &tio->info);
@@ -1651,19 +1654,31 @@ static void dm_request_fn(struct request
if (!rq)
goto delay_and_out;

+ clone = rq->special;
+
/* always use block 0 to find the target for flushes for now */
pos = 0;
if (!(rq->cmd_flags & REQ_FLUSH))
pos = blk_rq_pos(rq);

ti = dm_table_find_target(map, pos);
- BUG_ON(!dm_target_is_valid(ti));
+ if (!dm_target_is_valid(ti)) {
+ /*
+ * Must perform setup, that dm_done() requires,
+ * before calling dm_kill_unmapped_request
+ */
+ DMERR_LIMIT("request attempted access beyond the end of device");
+ blk_start_request(rq);
+ atomic_inc(&md->pending[rq_data_dir(clone)]);
+ dm_get(md);
+ dm_kill_unmapped_request(clone, -EIO);
+ goto out;
+ }

if (ti->type->busy && ti->type->busy(ti))
goto delay_and_out;

blk_start_request(rq);
- clone = rq->special;
atomic_inc(&md->pending[rq_data_dir(clone)]);

spin_unlock(q->queue_lock);
@@ -1684,8 +1699,6 @@ delay_and_out:
blk_delay_queue(q, HZ / 10);
out:
dm_table_put(map);
-
- return;
}

int dm_underlying_device_busy(struct request_queue *q)

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
09-24-2012, 09:38 AM
"Jun'ichi Nomura"

dm: gracefully fail any request beyond the end of the device

On 09/22/12 00:47, Mike Snitzer wrote:
> @@ -1651,19 +1654,31 @@ static void dm_request_fn(struct request
> if (!rq)
> goto delay_and_out;
>
> + clone = rq->special;
> +
> /* always use block 0 to find the target for flushes for now */
> pos = 0;
> if (!(rq->cmd_flags & REQ_FLUSH))
> pos = blk_rq_pos(rq);
>
> ti = dm_table_find_target(map, pos);
> - BUG_ON(!dm_target_is_valid(ti));
> + if (!dm_target_is_valid(ti)) {
> + /*
> + * Must perform setup, that dm_done() requires,
> + * before calling dm_kill_unmapped_request
> + */
> + DMERR_LIMIT("request attempted access beyond the end of device");
> + blk_start_request(rq);
> + atomic_inc(&md->pending[rq_data_dir(clone)]);
> + dm_get(md);
> + dm_kill_unmapped_request(clone, -EIO);
> + goto out;

This "goto out" should be "continue" so that request_fn
process next requests in the queue.

Also, I think introducing a dm_start_request() function
will make this part of the code a little easier to read.
An edited patch is attached.

--
Jun'ichi Nomura, NEC Corporation

 
09-24-2012, 01:07 PM
Mike Snitzer

dm: gracefully fail any request beyond the end of the device

On Mon, Sep 24 2012 at 5:38am -0400,
Jun'ichi Nomura <j-nomura@ce.jp.nec.com> wrote:

> On 09/22/12 00:47, Mike Snitzer wrote:
> > @@ -1651,19 +1654,31 @@ static void dm_request_fn(struct request
> > if (!rq)
> > goto delay_and_out;
> >
> > + clone = rq->special;
> > +
> > /* always use block 0 to find the target for flushes for now */
> > pos = 0;
> > if (!(rq->cmd_flags & REQ_FLUSH))
> > pos = blk_rq_pos(rq);
> >
> > ti = dm_table_find_target(map, pos);
> > - BUG_ON(!dm_target_is_valid(ti));
> > + if (!dm_target_is_valid(ti)) {
> > + /*
> > + * Must perform setup, that dm_done() requires,
> > + * before calling dm_kill_unmapped_request
> > + */
> > + DMERR_LIMIT("request attempted access beyond the end of device");
> > + blk_start_request(rq);
> > + atomic_inc(&md->pending[rq_data_dir(clone)]);
> > + dm_get(md);
> > + dm_kill_unmapped_request(clone, -EIO);
> > + goto out;
>
> This "goto out" should be "continue" so that request_fn
> process next requests in the queue.
>
> Also I think introducing a function dm_start_request()
> will make this part of code a little bit easier for reading.
> An edited patch is attached.

Aside from the continue, this matches exactly what I was going to do for v3
(based on Mike Christie's feedback, which was to introduce
dm_start_request too). Anyway, looks great.

I'll get a formal v3 posted so Alasdair can stage it.

Thanks,
Mike

 
09-24-2012, 01:28 PM
Mike Snitzer

dm: gracefully fail any request beyond the end of the device

The access beyond the end of device BUG_ON that was introduced to
dm_request_fn via commit 29e4013de7ad950280e4b2208 ("dm: implement
REQ_FLUSH/FUA support for request-based dm") is an overly drastic
response. Use dm_kill_unmapped_request() to fail the clone and original
request with -EIO.

map_request() will assign the valid target returned by
dm_table_find_target() to tio->ti. But in the case where the target
isn't valid, tio->ti is never assigned (because map_request() isn't
called); so add a check for tio->ti != NULL to dm_done().

Reported-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Cc: stable@vger.kernel.org # v2.6.37+
---
drivers/md/dm.c | 51 +++++++++++++++++++++++++++++++++++----------------
1 file changed, 35 insertions(+), 16 deletions(-)

v2: added a DMERR_LIMIT message to give context for the IO errors
v3: folded in Jun'ichi's changes: dm_start_request and continue

Index: linux-2.6/drivers/md/dm.c
===================================================================
--- linux-2.6.orig/drivers/md/dm.c
+++ linux-2.6/drivers/md/dm.c
@@ -865,7 +865,10 @@ static void dm_done(struct request *clon
{
int r = error;
struct dm_rq_target_io *tio = clone->end_io_data;
- dm_request_endio_fn rq_end_io = tio->ti->type->rq_end_io;
+ dm_request_endio_fn rq_end_io = NULL;
+
+ if (tio->ti)
+ rq_end_io = tio->ti->type->rq_end_io;

if (mapped && rq_end_io)
r = rq_end_io(tio->ti, clone, error, &tio->info);
@@ -1588,15 +1591,6 @@ static int map_request(struct dm_target
int r, requeued = 0;
struct dm_rq_target_io *tio = clone->end_io_data;

- /*
- * Hold the md reference here for the in-flight I/O.
- * We can't rely on the reference count by device opener,
- * because the device may be closed during the request completion
- * when all bios are completed.
- * See the comment in rq_completed() too.
- */
- dm_get(md);
-
tio->ti = ti;
r = ti->type->map_rq(ti, clone, &tio->info);
switch (r) {
@@ -1628,6 +1622,26 @@ static int map_request(struct dm_target
return requeued;
}

+static struct request *dm_start_request(struct mapped_device *md, struct request *orig)
+{
+ struct request *clone;
+
+ blk_start_request(orig);
+ clone = orig->special;
+ atomic_inc(&md->pending[rq_data_dir(clone)]);
+
+ /*
+ * Hold the md reference here for the in-flight I/O.
+ * We can't rely on the reference count by device opener,
+ * because the device may be closed during the request completion
+ * when all bios are completed.
+ * See the comment in rq_completed() too.
+ */
+ dm_get(md);
+
+ return clone;
+}
+
/*
* q->request_fn for request-based dm.
* Called with the queue lock held.
@@ -1657,14 +1671,21 @@ static void dm_request_fn(struct request
pos = blk_rq_pos(rq);

ti = dm_table_find_target(map, pos);
- BUG_ON(!dm_target_is_valid(ti));
+ if (!dm_target_is_valid(ti)) {
+ /*
+ * Must perform setup, that dm_done() requires,
+ * before calling dm_kill_unmapped_request
+ */
+ DMERR_LIMIT("request attempted access beyond the end of device");
+ clone = dm_start_request(md, rq);
+ dm_kill_unmapped_request(clone, -EIO);
+ continue;
+ }

if (ti->type->busy && ti->type->busy(ti))
goto delay_and_out;

- blk_start_request(rq);
- clone = rq->special;
- atomic_inc(&md->pending[rq_data_dir(clone)]);
+ clone = dm_start_request(md, rq);

spin_unlock(q->queue_lock);
if (map_request(ti, clone, md))
@@ -1684,8 +1705,6 @@ delay_and_out:
blk_delay_queue(q, HZ / 10);
out:
dm_table_put(map);
-
- return;
}

int dm_underlying_device_busy(struct request_queue *q)
