Linux Archive > Redhat > Device-mapper Development
Old 08-06-2008, 01:45 PM
 
Default bio-cgroup: Split the cgroup memory subsystem into two parts

----- Original Message -----
>> > This patch splits the cgroup memory subsystem into two parts.
>> > One is for tracking pages to find out the owners. The other is
>> > for controlling how much amount of memory should be assigned to
>> > each cgroup.
>> >
>> > With this patch, you can use the page tracking mechanism even if
>> > the memory subsystem is off.
>> >
>> > Based on 2.6.27-rc1-mm1
>> > Signed-off-by: Ryo Tsuruta <ryov@valinux.co.jp>
>> > Signed-off-by: Hirokazu Takahashi <taka@valinux.co.jp>
>> >
>>
>> Please CC me, Balbir, or Pavel (see the maintainer list) when you try this.
>>
>> After this patch, the total structure is
>>
>> page <-> page_cgroup <-> bio_cgroup.
>> (multiple bio_cgroup can be attached to page_cgroup)
>>
>> Will this pointer chain add
>> - a significant performance regression, or
>> - new race conditions?
>
>I don't think it will cause a significant performance loss, because
>the link between a page and a page_cgroup already exists; it was set
>up by the memory resource controller. Bio-cgroup uses it as is and
>adds nothing on top.
>
>And the link between page_cgroup and bio_cgroup isn't protected
>by any additional spin-locks, since the associated bio_cgroup is
>guaranteed to exist as long as the bio_cgroup owns pages.
>
Hmm, I think page_cgroup's cost is visible when
1. a page changes to the in-use state (fault or radix-tree insert),
2. a page changes to the out-of-use state (fault or radix-tree removal),
3. memcg hits its limit or global LRU reclaim runs.

"1" and "2" show up as a 5% loss of exec throughput.
"3" has not been measured (because the LRU walk itself is heavy).

What new page_cgroup access paths will you add?
I'll have to take them into account.

>I've just noticed that most of overhead comes from the spin-locks
>when reclaiming the pages inside mem_cgroups and the spin-locks to
>protect the links between pages and page_cgroups.
The overhead of the page <-> page_cgroup lock cannot be caught by
lock_stat right now. Do you have numbers?
But OK, there are too many locks ;(

>The latter overhead comes from the policy your team chose of
>allocating page_cgroup structures on demand. I still feel this
>approach doesn't make sense, because the Linux kernel tries to make
>use of as many pages as it can, so most of them end up needing an
>associated page_cgroup anyway. It would make us happy if page_cgroups
>were allocated at boot time.
>
Now, a multi-size page cache has been discussed for a long time. If that
is our direction, on-demand page_cgroup allocation makes sense.


>> For example, adding a simple function.
>> ==
>> int get_page_io_id(struct page *)
>> - returns the I/O cgroup ID for this page. If no ID is found, -1 is returned.
>> The ID is not guaranteed to be a valid value. (The ID can be obsolete.)
>> ==
>> Then just store the cgroup ID in page_cgroup at page allocation,
>> make bio_cgroup independent of page_cgroup, and
>> get the ID if available, avoiding too much pointer walking.
>
>I don't think there are any differences between a pointer and an ID.
>I think the ID is just an encoded version of the pointer.
>
An ID can become obsolete; a pointer cannot. Does the memory cgroup have
to take care of bio-cgroup's race conditions? (As for race conditions,
it's already complicated enough.)

To be honest, I think adding a new (4- or 8-byte) field to struct page
and recording the bio-control information there is the more
straightforward approach. But as you might think, "there is no room".

Thanks,
-Kame

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
Old 09-25-2008, 08:46 AM
Hirokazu Takahashi
 
Default bio-cgroup: Split the cgroup memory subsystem into two parts

Hi,

> I posted new ones.
> - http://lkml.org/lkml/2008/9/25/33
> (changes in page_cgroup) http://lkml.org/lkml/2008/9/25/56
> (I'm not sure whether this will get an Ack or a Nack, but the direction
> won't change.)
>
> Then, please tell me if you have new troubles with new ones.
> Or if you have requests.
> Major changes are
>
> - page_cgroup.h is added.
> - lookup_page_cgroup(struct page *), lock_page_cgroup(), etc. are exported.
> - All page_cgroups are allocated at boot.
> - You can use atomic operations to modify page_cgroup->flags.

Good news!

> One concern of mine with this bio_cgroup is that it increases the size
> of struct page_cgroup:
> +#ifdef CONFIG_CGROUP_BIO
> + struct list_head blist; /* for bio_cgroup page list */
> + struct bio_cgroup *bio_cgroup;
> +#endif
> That's 24 more bytes per 4096-byte page.
> Could you reduce this? I think 8 bytes per memcg is reasonable.
> Can you move the lru to the bio itself?

I have a plan to get rid of the blist once your work is done; its design
will depend on the fact that all page_cgroups are preallocated.
I also think the size can be reduced by making page_cgroup hold a
bio-cgroup ID instead of the pointer.

Just wait!

> This grows page_cgroup from 40 bytes to 64 bytes and makes it larger
> than mem_map...
> With bio_cgroup, the page_cgroups allocated at boot on my 48GB box
> will jump from 480MB to 760MB.
>
> Thanks,
> -Kame

