Old 08-04-2008, 08:51 AM
Ryo Tsuruta
 
I/O bandwidth controller and BIO tracking

Hi everyone,

This series of dm-ioband patches now includes the BIO tracking mechanism,
which had previously been posted separately to this mailing list.
It makes it easy for anybody to control I/O bandwidth even when
the I/O is a delayed-write request.
Have fun!

This series of patches consists of two parts:
1. dm-ioband
Dm-ioband is an I/O bandwidth controller implemented as a
device-mapper driver, which gives a specified bandwidth to each job
running on the same physical device. A job is a group of processes
with the same pid, pgrp, or uid, or a virtual machine such as KVM
or Xen. A job can also be a cgroup by applying the bio-cgroup patch.
2. bio-cgroup
Bio-cgroup is a BIO tracking mechanism implemented on top of
the cgroup memory subsystem. It makes it possible to determine
which cgroup each bio belongs to, even when the bio is a
delayed-write request issued from a kernel thread such as pdflush.
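The tracking idea can be sketched as a toy model (illustrative Python only, not the kernel code; the names `dirty_page` and `submit_writeback` are made up for the sketch): record the owning cgroup when a page is dirtied, so that when a kernel thread later writes the page back, the resulting bio can be charged to the original owner rather than to the writeback thread.

```python
# Toy model of BIO tracking: attribute delayed writes to the cgroup
# whose process dirtied the page, not to the writeback thread (pdflush).
# All names here are illustrative, not the kernel API.

page_owner = {}  # page number -> cgroup id, like page_cgroup in the patch

def dirty_page(page, cgroup_id):
    """A process in cgroup_id dirties a page; remember the owner."""
    page_owner[page] = cgroup_id

def submit_writeback(pages):
    """Later, a kernel thread writes the pages back. Without tracking,
    all I/O would be charged to the writeback thread; with tracking,
    each bio is charged to the recorded owner (0 = unknown)."""
    return [(page, page_owner.get(page, 0)) for page in pages]

# A process in cgroup 7 dirties pages 1 and 2; one in cgroup 9 dirties page 3.
dirty_page(1, 7); dirty_page(2, 7); dirty_page(3, 9)
charged = submit_writeback([1, 2, 3])
```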

Until now the two parts have been posted to this mailing list
individually, but from now on we will release them together.

[PATCH 1/7] dm-ioband: Patch of device-mapper driver
[PATCH 2/7] dm-ioband: Documentation of design overview, installation,
command reference and examples.
[PATCH 3/7] bio-cgroup: Introduction
[PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem into two parts
[PATCH 5/7] bio-cgroup: Remove a lot of "#ifdef"s
[PATCH 6/7] bio-cgroup: Implement the bio-cgroup
[PATCH 7/7] bio-cgroup: Add a cgroup support to dm-ioband

Please see the following site for more information:
Linux Block I/O Bandwidth Control Project
http://people.valinux.co.jp/~ryov/bwctl/

Thanks,
Ryo Tsuruta

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
Old 08-12-2008, 12:31 PM
Ryo Tsuruta
 
I/O bandwidth controller and BIO tracking

Hi everyone,

Here are new releases of dm-ioband and bio-cgroup.

The major change from the previous version is that dm-ioband now supports
a device-mapper bvec merge function, which removes the restriction whereby
the device-mapper framework automatically split an I/O request into
several small I/O requests. The size of I/O requests was limited to
PAGE_SIZE when the underlying device, such as software RAID, had its
own merge function. This restriction used to apply to all
device-mapper drivers, but it was recently lifted by introducing
the bvec merge function feature into device-mapper.
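The effect of such a merge function can be sketched with a toy model (illustrative Python, not the device-mapper API; the stripe size and function name are invented for the sketch): the driver reports how many more bytes may be added to an I/O ending at a given position, e.g. up to the next stripe boundary, so the framework can build large requests instead of splitting everything at PAGE_SIZE.

```python
# Toy model of a bvec-merge-style callback (not the real device-mapper
# interface). A striped device can only merge an I/O up to its next
# stripe boundary; the callback reports how many bytes may still be added.

SECTOR_SIZE = 512
STRIPE_SECTORS = 128  # hypothetical 64 KiB stripe

def max_mergeable_bytes(start_sector, current_bytes):
    """Bytes that may still be merged into an I/O that starts at
    start_sector and currently covers current_bytes bytes."""
    end_sector = start_sector + current_bytes // SECTOR_SIZE
    sectors_to_boundary = STRIPE_SECTORS - (end_sector % STRIPE_SECTORS)
    return sectors_to_boundary * SECTOR_SIZE

# An I/O starting at sector 0 with 4 KiB queued can still grow up to
# the 64 KiB stripe boundary.
room = max_mergeable_bytes(0, 4096)
```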

The release also includes a minor update to bio-cgroup that removes
some no-longer-needed code from bio_cgroup_move_task().

dm-ioband
Dm-ioband is an I/O bandwidth controller implemented as a
device-mapper driver, which gives a specified bandwidth to each job
running on the same block device. A job is a group of processes
with the same pid, pgrp, or uid, or a virtual machine such as KVM
or Xen. A job can also be a cgroup by applying the bio-cgroup patch.

bio-cgroup
Bio-cgroup is a BIO tracking mechanism implemented on top of the
cgroup memory subsystem. It makes it possible to determine which
cgroup each bio belongs to, even when the bio is a delayed-write
request issued from a kernel thread such as pdflush.

The following is a list of patches:

[PATCH 1/7] dm-ioband: Patch of device-mapper driver
[PATCH 2/7] dm-ioband: Documentation of design overview, installation,
command reference and examples.
[PATCH 3/7] bio-cgroup: Introduction
[PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem into two parts
[PATCH 5/7] bio-cgroup: Remove a lot of "#ifdef"s
[PATCH 6/7] bio-cgroup: Implement the bio-cgroup
[PATCH 7/7] bio-cgroup: Add a cgroup support to dm-ioband

Please see the following site for more information:
Linux Block I/O Bandwidth Control Project
http://people.valinux.co.jp/~ryov/bwctl/

Thanks,
Ryo Tsuruta

 
Old 09-24-2008, 10:10 AM
Ryo Tsuruta
 
I/O bandwidth controller and BIO tracking

Hi everyone,

These patchsets are new releases of dm-ioband and bio-cgroup,
ported to 2.6.27-rc5-mm1.

dm-ioband
Dm-ioband is an I/O bandwidth controller implemented as a
device-mapper driver, which gives a specified bandwidth to each job
running on the same block device. A job is a group of processes
with the same pid, pgrp, or uid, or a virtual machine such as KVM
or Xen. A job can also be a cgroup by applying the bio-cgroup patch.

bio-cgroup
Bio-cgroup is a BIO tracking mechanism implemented on top of the
cgroup memory subsystem. It makes it possible to determine which
cgroup each bio belongs to, even when the bio is a delayed-write
request issued from a kernel thread such as pdflush.

The following is a list of patches:

[PATCH 1/8] dm-ioband: Patch of device-mapper driver
[PATCH 2/8] dm-ioband: Documentation of design overview, installation,
command reference and examples.
[PATCH 3/8] bio-cgroup: Introduction
[PATCH 4/8] bio-cgroup: Split the cgroup memory subsystem into two parts
[PATCH 5/8] bio-cgroup: Remove a lot of "#ifdef"s
[PATCH 6/8] bio-cgroup: Implement the bio-cgroup
[PATCH 7/8] bio-cgroup: Add a cgroup support to dm-ioband
[PATCH 8/8] bio-cgroup: Dirty page tracking

Please see the following site for more information:
Linux Block I/O Bandwidth Control Project
http://people.valinux.co.jp/~ryov/bwctl/

Thanks,
Ryo Tsuruta

 
Old 11-13-2008, 02:10 AM
Ryo Tsuruta
 
I/O bandwidth controller and BIO tracking

Hi everyone,

This is a new release of dm-ioband and bio-cgroup. With this release,
the overhead of bio-cgroup is significantly reduced and the accuracy
of block I/O tracking is much improved. These patches are for
2.6.28-rc2-mm1.

Enjoy it!

dm-ioband
=========

Dm-ioband is an I/O bandwidth controller implemented as a
device-mapper driver, which gives a specified bandwidth to each job
running on the same block device. A job is a group of processes
or a virtual machine such as KVM or Xen.
I/O throughput with dm-ioband is excellent not only on SATA storage
but also on SSDs, where it is nearly as good as without dm-ioband.

Changes from the previous release:
- Fixed a bug where create_workqueue() was called while holding a
  spinlock when creating a new ioband group.
- Added a new tunable parameter "carryover", which specifies how
  many tokens an ioband group can keep for future use when the
  group isn't very active.
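A rough sketch of how a carryover cap on token accumulation could behave (illustrative Python only; the group names, weights, and `refill` function are invented here, and the real accounting lives inside dm-ioband):

```python
# Toy model of weight-proportional token refill with a "carryover" cap:
# an idle group may bank at most `carryover` rounds' worth of its share.
# Illustrative only; not dm-ioband's actual algorithm.

def refill(tokens, weights, total_per_round, carryover):
    """Give each group tokens proportional to its weight, capping what
    an inactive group can accumulate at carryover * its per-round share."""
    total_weight = sum(weights.values())
    for group, weight in weights.items():
        share = total_per_round * weight // total_weight
        cap = share * carryover
        tokens[group] = min(tokens.get(group, 0) + share, cap)
    return tokens

weights = {"db": 3, "backup": 1}   # hypothetical ioband groups
tokens = {}
for _ in range(5):                 # neither group spends tokens for 5 rounds
    refill(tokens, weights, total_per_round=400, carryover=2)
```

With carryover=2, each group saturates at twice its per-round share instead of hoarding tokens indefinitely while idle.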

TODO:
- Other policies to schedule BIOs.
- Policies that fit SSDs.
  e.g.)
  - Guaranteed response time.
  - Guaranteed throughput.
- Policies that fit high-end storage or hardware RAID storage.
  - Some LUNs may share the same bandwidth.
- Support WRITE_BARRIER when the device-mapper layer supports it.
- Implement the algorithm of dm-ioband in the block I/O layer
  experimentally.

bio-cgroup
==========

Bio-cgroup is a BIO tracking mechanism implemented on top of the
cgroup memory subsystem. It makes it possible to determine which
cgroup each bio belongs to, even when the bio is a delayed-write
request issued from a kernel thread such as pdflush.

Changes from the previous release:
- This release is a new implementation.
- It is based on the new design of the cgroup memory controller
  framework, which pre-allocates all cgroup-page data structures to
  reduce overhead.
- The overhead of tracing block I/O requests is much smaller than
  before. This is achieved by having every page store the id of its
  bio-cgroup rather than a pointer to it; most of the spinlocks and
  atomic operations are gone.
- This implementation uses only 4 bytes per page for I/O tracking,
  while the previous version used 12 bytes on a 32-bit machine and
  24 bytes on a 64-bit machine.
- The accuracy of I/O tracking is improved so that I/O requests can
  be traced even when the processes that issued them are moved into
  another bio-cgroup.
- Bounce buffers are now tracked; they get the same bio-cgroup
  owners as the original I/O requests.

TODO:
- Support tracking of I/O requests generated inside the kernel,
  such as those of RAID0 and RAID5.

A list of patches
=================

The following is a list of patches:

[PATCH 0/8] I/O bandwidth controller and BIO tracking
[PATCH 1/8] dm-ioband: Introduction
[PATCH 2/8] dm-ioband: Source code and patch
[PATCH 3/8] dm-ioband: Document
[PATCH 4/8] bio-cgroup: Introduction
[PATCH 5/8] bio-cgroup: The new page_cgroup framework
[PATCH 6/8] bio-cgroup: The body of bio-cgroup
[PATCH 7/8] bio-cgroup: Page tracking hooks
[PATCH 8/8] bio-cgroup: Add a cgroup support to dm-ioband

Please see the following site for more information:
Linux Block I/O Bandwidth Control Project
http://people.valinux.co.jp/~ryov/bwctl/

Thanks,
Ryo Tsuruta

 
Old 11-13-2008, 02:50 AM
Ryo Tsuruta
 
I/O bandwidth controller and BIO tracking

Hi Alasdair,

As you know, I posted a new dm-ioband patch today. I hope you can
take a quick look over the patch and send me some comments.
Could you tell me when you will be able to do so?

I have been waiting for your reply.

Thanks,
Ryo Tsuruta

 
Old 11-13-2008, 05:30 AM
KAMEZAWA Hiroyuki
 
I/O bandwidth controller and BIO tracking

On Thu, 13 Nov 2008 12:10:19 +0900 (JST)
Ryo Tsuruta <ryov@valinux.co.jp> wrote:

> Hi everyone,
>
> This is a new release of dm-ioband and bio-cgroup. With this release,
> the overhead of bio-cgroup is significantly reduced and the accuracy
> of block I/O tracking is much improved. These patches are for
> 2.6.28-rc2-mm1.
>

From my point of view, recording a bio_cgroup_id in page_cgroup is
quite neat and nice.

My concern is "bio_cgroup_id": it's provided only for bio_cgroup.
This summer I tried to add a swap_cgroup_id just for the mem+swap
controller, but reviewers said "please provide 'id and lookup' in the
cgroup layer; it should be useful." And I agree with them. (So I
postponed it.)

Could you try an "id" in the cgroup layer? What do you think, Paul
and others?

That's my only concern, and if the I/O controller people decide to
live with this bio tracking infrastructure,
==
page -> page_cgroup -> bio_cgroup_id
==
I have no objections, and will enqueue the necessary changes to my queue.
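Storing a small id instead of a pointer only pays off if the id can be resolved back to its cgroup, which is what a generic "id and lookup" facility at the cgroup layer would provide. A rough sketch of the idea (illustrative Python only; the class and method names are invented here, not the kernel interface):

```python
# Toy model of a generic "id and lookup" facility at the cgroup layer
# (illustrative only, not the kernel interface). Storing a small id in
# page_cgroup costs just 4 bytes per page, but requires an id -> cgroup
# lookup like this one.

class CgroupIdRegistry:
    def __init__(self):
        self._by_id = {}
        self._next = 1            # id 0 reserved for "no owner"

    def register(self, cgroup):
        """Assign a small integer id to a cgroup and remember it."""
        cid = self._next
        self._next += 1
        self._by_id[cid] = cgroup
        return cid

    def lookup(self, cid):
        """Resolve an id back to its cgroup (None if it is gone)."""
        return self._by_id.get(cid)

reg = CgroupIdRegistry()
cid = reg.register("bio_cgroup_A")   # hypothetical cgroup
found = reg.lookup(cid)
```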

Thanks,
-Kame




 
