08-18-2011, 09:57 PM
Mikulas Patocka

dm: lower bio-based reservation

Hi

When we talked about those reservations, I realized that unusually high
reservations (256 entries) are made even for bio-based processing.

This patch lowers that to just 16 entries for bio-based processing. It
shouldn't deadlock; I am not aware of any theoretical deadlock in
bio-based dm.

Request-based memory consumption cannot be reduced as easily: it uses
GFP_ATOMIC to allocate all entries at once, and if that fails, the
request is pushed back to the block layer. Thus it needs enough
reserved entries to process at least one complete request.
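
As a rough sketch of that all-or-nothing pattern (not the actual dm
code; clone_all_tios() and nr_bios are hypothetical names used only
for illustration):

/*
 * Sketch: request-based dm clones every per-bio structure of a request
 * up front with GFP_ATOMIC.  If any allocation fails, everything
 * already taken is returned to the pool and the request is pushed back
 * to the block layer, so the pool's reserve must cover at least one
 * complete request.
 */
static int clone_all_tios(mempool_t *pool, void **tios, unsigned nr_bios)
{
	unsigned i;

	for (i = 0; i < nr_bios; i++) {
		/* GFP_ATOMIC never sleeps; NULL means memory and reserve are exhausted */
		tios[i] = mempool_alloc(pool, GFP_ATOMIC);
		if (!tios[i])
			goto rollback;
	}
	return 0;

rollback:
	while (i--)
		mempool_free(tios[i], pool);
	return -EBUSY;	/* caller pushes the request back and retries later */
}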

Mikulas

---

dm: lower bio-based reservation

Bio-based device mapper processing doesn't need large pools (in fact, just
one entry would be sufficient), so this patch lowers the number of reserved
entries for bio-based operation.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
drivers/md/dm.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)

Index: linux-3.0-fast/drivers/md/dm.c
===================================================================
--- linux-3.0-fast.orig/drivers/md/dm.c	2011-08-18 21:13:33.000000000 +0200
+++ linux-3.0-fast/drivers/md/dm.c	2011-08-18 21:14:41.000000000 +0200
@@ -197,7 +197,8 @@ struct dm_md_mempools {
 	struct bio_set *bs;
 };
 
-#define MIN_IOS 256
+#define RESERVED_BIO_BASED_IOS 16
+#define RESERVED_REQUEST_BASED_IOS 256
 static struct kmem_cache *_io_cache;
 static struct kmem_cache *_tio_cache;
 static struct kmem_cache *_rq_tio_cache;
@@ -2686,20 +2687,21 @@ EXPORT_SYMBOL_GPL(dm_noflush_suspending)
 struct dm_md_mempools *dm_alloc_md_mempools(unsigned type, unsigned integrity)
 {
 	struct dm_md_mempools *pools = kmalloc(sizeof(*pools), GFP_KERNEL);
-	unsigned int pool_size = (type == DM_TYPE_BIO_BASED) ? 16 : MIN_IOS;
+	unsigned int pool_size = (type == DM_TYPE_BIO_BASED) ?
+		RESERVED_BIO_BASED_IOS : RESERVED_REQUEST_BASED_IOS;
 
 	if (!pools)
 		return NULL;
 
 	pools->io_pool = (type == DM_TYPE_BIO_BASED) ?
-		mempool_create_slab_pool(MIN_IOS, _io_cache) :
-		mempool_create_slab_pool(MIN_IOS, _rq_bio_info_cache);
+		mempool_create_slab_pool(pool_size, _io_cache) :
+		mempool_create_slab_pool(pool_size, _rq_bio_info_cache);
 	if (!pools->io_pool)
 		goto free_pools_and_out;
 
 	pools->tio_pool = (type == DM_TYPE_BIO_BASED) ?
-		mempool_create_slab_pool(MIN_IOS, _tio_cache) :
-		mempool_create_slab_pool(MIN_IOS, _rq_tio_cache);
+		mempool_create_slab_pool(pool_size, _tio_cache) :
+		mempool_create_slab_pool(pool_size, _rq_tio_cache);
 	if (!pools->tio_pool)
 		goto free_io_pool_and_out;


--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
08-18-2011, 10:34 PM
Mikulas Patocka

dm: lower bio-based reservation

> > Request-based memory consumption cannot be reduced as easily: it uses
> > GFP_ATOMIC to allocate all entries at once, and if that fails, the
> > request is pushed back to the block layer. Thus it needs enough
> > reserved entries to process at least one complete request.
>
> But couldn't the rq-based reserves be shared and simply protected by a
> lock?

You don't need a special lock --- mempools are protected by a lock of
their own.
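
To sketch the shared-pool idea being discussed (shared_rq_pool and
dm_shared_pool_init() are hypothetical names, not merged code):
mempool_alloc() and mempool_free() serialize on the pool's internal
spinlock, so all request-based devices could draw from one pool without
any extra lock around the allocation itself.

static mempool_t *shared_rq_pool;

static int dm_shared_pool_init(struct kmem_cache *cache)
{
	/* one reserve shared by every request-based device */
	shared_rq_pool = mempool_create_slab_pool(RESERVED_REQUEST_BASED_IOS, cache);
	return shared_rq_pool ? 0 : -ENOMEM;
}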

> We'd need a scheme where we provided a pool+lock per rq-based DM layer
> (for the conceptual possibility that we'd stack request-based DM devices
> even though in practice it isn't done).
>
> Being aware of how deep the stacking is isn't very elegant though
> (breaks DM abstraction).
>
> Not to mention, we'd also serialize cloned-request allocations (note it
> wouldn't serialize IO, just the allocations).
>
> Chances are it would hurt performance for heavy IO to many mpath
> devices... so I'm not saying it is a wonderful solution. But can you
> guys think of variants of this shared pool scheme that might work?

If you don't stack rq-based devices, then a shared mempool may work.

It is prone to livelocks --- suppose a continuous stream of IO to one
device blocks progress on another device. But if we assume that the
stream won't be infinite, or that some memory is eventually freed, it
may work.

Mikulas

> Mike
>

