09-25-2012, 05:59 PM
Jens Axboe

Fix a crash when block device is read and block size is changed at the same time

On 2012-09-25 19:49, Jeff Moyer wrote:
> Jeff Moyer <jmoyer@redhat.com> writes:
>
>> Mikulas Patocka <mpatocka@redhat.com> writes:
>>
>>> Hi Jeff
>>>
>>> Thanks for testing.
>>>
>>> It would be interesting ... what happens if you take the patch 3, leave
>>> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
>>> block_device", but remove any use of the semaphore from fs/block_dev.c? -
>>> will the performance be like unpatched kernel or like patch 3? It could be
>>> that the change in the alignment affects performance on your CPU too, just
>>> differently than on my CPU.
>>
>> It turns out to be exactly the same performance as with the 3rd patch
>> applied, so I guess it does have something to do with cache alignment.
>> Here is the patch (against vanilla) I ended up testing. Let me know if
>> I've botched it somehow.
>>
>> So, next up I'll play similar tricks to what you did (padding struct
>> block_device in all kernels) to eliminate the differences due to
>> structure alignment and provide a clear picture of what the locking
>> effects are.
>
> After trying again with the same padding you used in the struct
> bdev_inode, I see no performance differences between any of the
> patches. I tried bumping up the number of threads to saturate the
> number of cpus on a single NUMA node on my hardware, but that resulted
> in lower IOPS to the device, and hence consumption of less CPU time.
> So, I believe my results to be inconclusive.
>
> After talking with Vivek about the problem, he had mentioned that it
> might be worth investigating whether bd_block_size could be protected
> using SRCU. I looked into it, and the one thing I couldn't reconcile is
> updating both the bd_block_size and the inode->i_blkbits at the same
> time. It would involve (afaiui) adding fields to both the inode and the
> block_device data structures and using rcu_assign_pointer and
> rcu_dereference to modify and access the fields, and both fields would
> need to be protected by the same struct srcu_struct. I'm not sure whether
> that's a desirable approach. When I started to implement it, it got
> ugly pretty quickly. What do others think?
>
> For now, my preference is to get the full patch set in. I will continue
> to investigate the performance impact of the data structure size changes
> that I've been seeing.
>
> So, for the four patches:
>
> Acked-by: Jeff Moyer <jmoyer@redhat.com>
>
> Jens, can you have a look at the patch set? We are seeing problem
> reports of this in the wild[1][2].

I'll queue it up for 3.7. I can run my regular testing on the 8-way; it
has a knack for showing scaling problems very nicely in aio/dio. As long
as we're not adding per-inode cache line dirtying per IO (and the
per-cpu rw sem looks OK), then I don't think there's too much to worry
about.

--
Jens Axboe

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
09-25-2012, 06:11 PM
Jens Axboe

Fix a crash when block device is read and block size is changed at the same time

On 2012-09-25 19:59, Jens Axboe wrote:
> On 2012-09-25 19:49, Jeff Moyer wrote:
>> Jeff Moyer <jmoyer@redhat.com> writes:
>>
>>> Mikulas Patocka <mpatocka@redhat.com> writes:
>>>
>>>> Hi Jeff
>>>>
>>>> Thanks for testing.
>>>>
>>>> It would be interesting ... what happens if you take the patch 3, leave
>>>> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
>>>> block_device", but remove any use of the semaphore from fs/block_dev.c? -
>>>> will the performance be like unpatched kernel or like patch 3? It could be
>>>> that the change in the alignment affects performance on your CPU too, just
>>>> differently than on my CPU.
>>>
>>> It turns out to be exactly the same performance as with the 3rd patch
>>> applied, so I guess it does have something to do with cache alignment.
>>> Here is the patch (against vanilla) I ended up testing. Let me know if
>>> I've botched it somehow.
>>>
>>> So, next up I'll play similar tricks to what you did (padding struct
>>> block_device in all kernels) to eliminate the differences due to
>>> structure alignment and provide a clear picture of what the locking
>>> effects are.
>>
>> After trying again with the same padding you used in the struct
>> bdev_inode, I see no performance differences between any of the
>> patches. I tried bumping up the number of threads to saturate the
>> number of cpus on a single NUMA node on my hardware, but that resulted
>> in lower IOPS to the device, and hence consumption of less CPU time.
>> So, I believe my results to be inconclusive.
>>
>> After talking with Vivek about the problem, he had mentioned that it
>> might be worth investigating whether bd_block_size could be protected
>> using SRCU. I looked into it, and the one thing I couldn't reconcile is
>> updating both the bd_block_size and the inode->i_blkbits at the same
>> time. It would involve (afaiui) adding fields to both the inode and the
>> block_device data structures and using rcu_assign_pointer and
>> rcu_dereference to modify and access the fields, and both fields would
>> need to be protected by the same struct srcu_struct. I'm not sure whether
>> that's a desirable approach. When I started to implement it, it got
>> ugly pretty quickly. What do others think?
>>
>> For now, my preference is to get the full patch set in. I will continue
>> to investigate the performance impact of the data structure size changes
>> that I've been seeing.
>>
>> So, for the four patches:
>>
>> Acked-by: Jeff Moyer <jmoyer@redhat.com>
>>
>> Jens, can you have a look at the patch set? We are seeing problem
>> reports of this in the wild[1][2].
>
> I'll queue it up for 3.7. I can run my regular testing on the 8-way; it
> has a knack for showing scaling problems very nicely in aio/dio. As long
> as we're not adding per-inode cache line dirtying per IO (and the
> per-cpu rw sem looks OK), then I don't think there's too much to worry
> about.

I take that back. The series doesn't apply to my current tree. Not too
unexpected, since it's some weeks old. But more importantly, please send
this as a "real" patch series. I don't want to see two implementations
of rw semaphores. I think it's perfectly fine to first do a regular rw
sem, then a last patch adding the cache friendly variant from Eric and
converting to that.

In other words, get rid of 3/4.

--
Jens Axboe

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
09-25-2012, 10:49 PM
Mikulas Patocka

Fix a crash when block device is read and block size is changed at the same time

On Tue, 25 Sep 2012, Jens Axboe wrote:

> On 2012-09-25 19:59, Jens Axboe wrote:
> > On 2012-09-25 19:49, Jeff Moyer wrote:
> >> Jeff Moyer <jmoyer@redhat.com> writes:
> >>
> >>> Mikulas Patocka <mpatocka@redhat.com> writes:
> >>>
> >>>> Hi Jeff
> >>>>
> >>>> Thanks for testing.
> >>>>
> >>>> It would be interesting ... what happens if you take the patch 3, leave
> >>>> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
> >>>> block_device", but remove any use of the semaphore from fs/block_dev.c? -
> >>>> will the performance be like unpatched kernel or like patch 3? It could be
> >>>> that the change in the alignment affects performance on your CPU too, just
> >>>> differently than on my CPU.
> >>>
> >>> It turns out to be exactly the same performance as with the 3rd patch
> >>> applied, so I guess it does have something to do with cache alignment.
> >>> Here is the patch (against vanilla) I ended up testing. Let me know if
> >>> I've botched it somehow.
> >>>
> >>> So, next up I'll play similar tricks to what you did (padding struct
> >>> block_device in all kernels) to eliminate the differences due to
> >>> structure alignment and provide a clear picture of what the locking
> >>> effects are.
> >>
> >> After trying again with the same padding you used in the struct
> >> bdev_inode, I see no performance differences between any of the
> >> patches. I tried bumping up the number of threads to saturate the
> >> number of cpus on a single NUMA node on my hardware, but that resulted
> >> in lower IOPS to the device, and hence consumption of less CPU time.
> >> So, I believe my results to be inconclusive.
> >>
> >> After talking with Vivek about the problem, he had mentioned that it
> >> might be worth investigating whether bd_block_size could be protected
> >> using SRCU. I looked into it, and the one thing I couldn't reconcile is
> >> updating both the bd_block_size and the inode->i_blkbits at the same
> >> time. It would involve (afaiui) adding fields to both the inode and the
> >> block_device data structures and using rcu_assign_pointer and
> >> rcu_dereference to modify and access the fields, and both fields would
> >> need to be protected by the same struct srcu_struct. I'm not sure whether
> >> that's a desirable approach. When I started to implement it, it got
> >> ugly pretty quickly. What do others think?
> >>
> >> For now, my preference is to get the full patch set in. I will continue
> >> to investigate the performance impact of the data structure size changes
> >> that I've been seeing.
> >>
> >> So, for the four patches:
> >>
> >> Acked-by: Jeff Moyer <jmoyer@redhat.com>
> >>
> >> Jens, can you have a look at the patch set? We are seeing problem
> >> reports of this in the wild[1][2].
> >
> > I'll queue it up for 3.7. I can run my regular testing on the 8-way; it
> > has a knack for showing scaling problems very nicely in aio/dio. As long
> > as we're not adding per-inode cache line dirtying per IO (and the
> > per-cpu rw sem looks OK), then I don't think there's too much to worry
> > about.
>
> I take that back. The series doesn't apply to my current tree. Not too
> unexpected, since it's some weeks old. But more importantly, please send
> this as a "real" patch series. I don't want to see two implementations
> of rw semaphores. I think it's perfectly fine to first do a regular rw
> sem, then a last patch adding the cache friendly variant from Eric and
> converting to that.
>
> In other words, get rid of 3/4.
>
> --
> Jens Axboe

Hi Jens

Here I'm resending it as two patches. The first one uses an existing
semaphore; the second converts it to an RCU-based percpu semaphore.

Mikulas

---

blockdev: fix a crash when block size is changed and I/O is issued simultaneously

The kernel may crash when block size is changed and I/O is issued
simultaneously.

Because some subsystems (udev or lvm) may read any block device at any time,
the bug actually puts any code that changes the block size of a block device
in jeopardy.

The crash can be reproduced if you place "msleep(1000)" in blkdev_get_blocks
just before "bh->b_size = max_blocks << inode->i_blkbits;".
Then run "dd if=/dev/ram0 of=/dev/null bs=4k count=1 iflag=direct".
While it is waiting in msleep, run "blockdev --setbsz 2048 /dev/ram0".
You get a BUG.

The direct and non-direct I/O paths are written with the assumption that the
block size does not change. It doesn't seem practical to fix these crashes
one by one; there may be many crash possibilities when the block size changes
at a certain place, and it is impossible to find them all and verify the
code.

This patch introduces a new rw-lock bd_block_size_semaphore. The lock is
taken for read during I/O. It is taken for write when changing block
size. Consequently, block size can't be changed while I/O is being
submitted.

For asynchronous I/O, the patch only prevents block size change while
the I/O is being submitted. The block size can change when the I/O is in
progress or when the I/O is being finished. This is acceptable because
there are no accesses to block size when asynchronous I/O is being
finished.

The patch prevents the block size from being changed while the device is
mapped with mmap.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
---
drivers/char/raw.c | 2 -
fs/block_dev.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++--
include/linux/fs.h | 4 +++
3 files changed, 65 insertions(+), 3 deletions(-)

Index: linux-2.6-copy/include/linux/fs.h
===================================================================
--- linux-2.6-copy.orig/include/linux/fs.h 2012-09-03 15:55:47.000000000 +0200
+++ linux-2.6-copy/include/linux/fs.h 2012-09-26 00:41:07.000000000 +0200
@@ -724,6 +724,8 @@ struct block_device {
int bd_fsfreeze_count;
/* Mutex for freeze */
struct mutex bd_fsfreeze_mutex;
+ /* A semaphore that prevents I/O while block size is being changed */
+ struct rw_semaphore bd_block_size_semaphore;
};

/*
@@ -2564,6 +2566,8 @@ extern int generic_segment_checks(const
unsigned long *nr_segs, size_t *count, int access_flags);

/* fs/block_dev.c */
+extern ssize_t blkdev_aio_read(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos);
extern ssize_t blkdev_aio_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
extern int blkdev_fsync(struct file *filp, loff_t start, loff_t end,
Index: linux-2.6-copy/fs/block_dev.c
===================================================================
--- linux-2.6-copy.orig/fs/block_dev.c 2012-09-03 15:55:44.000000000 +0200
+++ linux-2.6-copy/fs/block_dev.c 2012-09-26 00:42:49.000000000 +0200
@@ -116,6 +116,8 @@ EXPORT_SYMBOL(invalidate_bdev);

int set_blocksize(struct block_device *bdev, int size)
{
+ struct address_space *mapping;
+
/* Size must be a power of two, and between 512 and PAGE_SIZE */
if (size > PAGE_SIZE || size < 512 || !is_power_of_2(size))
return -EINVAL;
@@ -124,6 +126,20 @@ int set_blocksize(struct block_device *b
if (size < bdev_logical_block_size(bdev))
return -EINVAL;

+ /* Prevent starting I/O or mapping the device */
+ down_write(&bdev->bd_block_size_semaphore);
+
+ /* Check that the block device is not memory mapped */
+ mapping = bdev->bd_inode->i_mapping;
+ mutex_lock(&mapping->i_mmap_mutex);
+ if (!prio_tree_empty(&mapping->i_mmap) ||
+ !list_empty(&mapping->i_mmap_nonlinear)) {
+ mutex_unlock(&mapping->i_mmap_mutex);
+ up_write(&bdev->bd_block_size_semaphore);
+ return -EBUSY;
+ }
+ mutex_unlock(&mapping->i_mmap_mutex);
+
/* Don't change the size if it is same as current */
if (bdev->bd_block_size != size) {
sync_blockdev(bdev);
@@ -131,6 +147,9 @@ int set_blocksize(struct block_device *b
bdev->bd_inode->i_blkbits = blksize_bits(size);
kill_bdev(bdev);
}
+
+ up_write(&bdev->bd_block_size_semaphore);
+
return 0;
}

@@ -472,6 +491,7 @@ static void init_once(void *foo)
inode_init_once(&ei->vfs_inode);
/* Initialize mutex for freeze. */
mutex_init(&bdev->bd_fsfreeze_mutex);
+ init_rwsem(&bdev->bd_block_size_semaphore);
}

static inline void __bd_forget(struct inode *inode)
@@ -1567,6 +1587,22 @@ static long block_ioctl(struct file *fil
return blkdev_ioctl(bdev, mode, cmd, arg);
}

+ssize_t blkdev_aio_read(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ ssize_t ret;
+ struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+
+ down_read(&bdev->bd_block_size_semaphore);
+
+ ret = generic_file_aio_read(iocb, iov, nr_segs, pos);
+
+ up_read(&bdev->bd_block_size_semaphore);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(blkdev_aio_read);
+
/*
* Write data to the block device. Only intended for the block device itself
* and the raw driver which basically is a fake block device.
@@ -1578,12 +1614,16 @@ ssize_t blkdev_aio_write(struct kiocb *i
unsigned long nr_segs, loff_t pos)
{
struct file *file = iocb->ki_filp;
+ struct block_device *bdev = I_BDEV(file->f_mapping->host);
struct blk_plug plug;
ssize_t ret;

BUG_ON(iocb->ki_pos != pos);

blk_start_plug(&plug);
+
+ down_read(&bdev->bd_block_size_semaphore);
+
ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
if (ret > 0 || ret == -EIOCBQUEUED) {
ssize_t err;
@@ -1592,11 +1632,29 @@ ssize_t blkdev_aio_write(struct kiocb *i
if (err < 0 && ret > 0)
ret = err;
}
+
+ up_read(&bdev->bd_block_size_semaphore);
+
blk_finish_plug(&plug);
+
return ret;
}
EXPORT_SYMBOL_GPL(blkdev_aio_write);

+int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ int ret;
+ struct block_device *bdev = I_BDEV(file->f_mapping->host);
+
+ down_read(&bdev->bd_block_size_semaphore);
+
+ ret = generic_file_mmap(file, vma);
+
+ up_read(&bdev->bd_block_size_semaphore);
+
+ return ret;
+}
+
/*
* Try to release a page associated with block device when the system
* is under memory pressure.
@@ -1627,9 +1685,9 @@ const struct file_operations def_blk_fop
.llseek = block_llseek,
.read = do_sync_read,
.write = do_sync_write,
- .aio_read = generic_file_aio_read,
+ .aio_read = blkdev_aio_read,
.aio_write = blkdev_aio_write,
- .mmap = generic_file_mmap,
+ .mmap = blkdev_mmap,
.fsync = blkdev_fsync,
.unlocked_ioctl = block_ioctl,
#ifdef CONFIG_COMPAT
Index: linux-2.6-copy/drivers/char/raw.c
===================================================================
--- linux-2.6-copy.orig/drivers/char/raw.c 2012-09-01 00:14:45.000000000 +0200
+++ linux-2.6-copy/drivers/char/raw.c 2012-09-26 00:41:07.000000000 +0200
@@ -285,7 +285,7 @@ static long raw_ctl_compat_ioctl(struct

static const struct file_operations raw_fops = {
.read = do_sync_read,
- .aio_read = generic_file_aio_read,
+ .aio_read = blkdev_aio_read,
.write = do_sync_write,
.aio_write = blkdev_aio_write,
.fsync = blkdev_fsync,

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
09-25-2012, 10:50 PM
Mikulas Patocka

Fix a crash when block device is read and block size is changed at the same time

blockdev: turn a rw semaphore into a percpu rw semaphore

This avoids cache line bouncing when many processes lock the semaphore
for read.

New percpu lock implementation

The lock consists of an array of percpu unsigned integers, a boolean
variable and a mutex.

When we take the lock for read, we enter an RCU read-side section and check
the "locked" variable. If it is false, we increase a percpu counter on the
current cpu and exit the rcu section. If "locked" is true, we exit the
rcu section, take the mutex and drop it (this waits until the writer has
finished) and retry.

Unlocking for read just decreases the percpu counter. Note that we can
unlock on a different cpu than the one where we locked; in this case the
counter underflows. The sum of all percpu counters represents the number
of processes that hold the lock for read.

When we need to lock for write, we take the mutex, set the "locked" variable
to true and synchronize RCU. Once RCU has been synchronized, no
process can create a new read lock. We then wait until the sum of the percpu
counters is zero; when it is, there are no readers left in the critical
section.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
Documentation/percpu-rw-semaphore.txt | 27 ++++++++++
fs/block_dev.c | 27 ++++++----
include/linux/fs.h | 3 -
include/linux/percpu-rwsem.h | 89 ++++++++++++++++++++++++++++++++++
4 files changed, 135 insertions(+), 11 deletions(-)

---

Index: linux-2.6-copy/fs/block_dev.c
===================================================================
--- linux-2.6-copy.orig/fs/block_dev.c 2012-09-26 00:42:49.000000000 +0200
+++ linux-2.6-copy/fs/block_dev.c 2012-09-26 00:45:29.000000000 +0200
@@ -127,7 +127,7 @@ int set_blocksize(struct block_device *b
return -EINVAL;

/* Prevent starting I/O or mapping the device */
- down_write(&bdev->bd_block_size_semaphore);
+ percpu_down_write(&bdev->bd_block_size_semaphore);

/* Check that the block device is not memory mapped */
mapping = bdev->bd_inode->i_mapping;
@@ -135,7 +135,7 @@ int set_blocksize(struct block_device *b
if (!prio_tree_empty(&mapping->i_mmap) ||
!list_empty(&mapping->i_mmap_nonlinear)) {
mutex_unlock(&mapping->i_mmap_mutex);
- up_write(&bdev->bd_block_size_semaphore);
+ percpu_up_write(&bdev->bd_block_size_semaphore);
return -EBUSY;
}
mutex_unlock(&mapping->i_mmap_mutex);
@@ -148,7 +148,7 @@ int set_blocksize(struct block_device *b
kill_bdev(bdev);
}

- up_write(&bdev->bd_block_size_semaphore);
+ percpu_up_write(&bdev->bd_block_size_semaphore);

return 0;
}
@@ -460,6 +460,12 @@ static struct inode *bdev_alloc_inode(st
struct bdev_inode *ei = kmem_cache_alloc(bdev_cachep, GFP_KERNEL);
if (!ei)
return NULL;
+
+ if (unlikely(percpu_init_rwsem(&ei->bdev.bd_block_size_semaphore))) {
+ kmem_cache_free(bdev_cachep, ei);
+ return NULL;
+ }
+
return &ei->vfs_inode;
}

@@ -468,6 +474,8 @@ static void bdev_i_callback(struct rcu_h
struct inode *inode = container_of(head, struct inode, i_rcu);
struct bdev_inode *bdi = BDEV_I(inode);

+ percpu_free_rwsem(&bdi->bdev.bd_block_size_semaphore);
+
kmem_cache_free(bdev_cachep, bdi);
}

@@ -491,7 +499,6 @@ static void init_once(void *foo)
inode_init_once(&ei->vfs_inode);
/* Initialize mutex for freeze. */
mutex_init(&bdev->bd_fsfreeze_mutex);
- init_rwsem(&bdev->bd_block_size_semaphore);
}

static inline void __bd_forget(struct inode *inode)
@@ -1593,11 +1600,11 @@ ssize_t blkdev_aio_read(struct kiocb *io
ssize_t ret;
struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);

- down_read(&bdev->bd_block_size_semaphore);
+ percpu_down_read(&bdev->bd_block_size_semaphore);

ret = generic_file_aio_read(iocb, iov, nr_segs, pos);

- up_read(&bdev->bd_block_size_semaphore);
+ percpu_up_read(&bdev->bd_block_size_semaphore);

return ret;
}
@@ -1622,7 +1629,7 @@ ssize_t blkdev_aio_write(struct kiocb *i

blk_start_plug(&plug);

- down_read(&bdev->bd_block_size_semaphore);
+ percpu_down_read(&bdev->bd_block_size_semaphore);

ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
if (ret > 0 || ret == -EIOCBQUEUED) {
@@ -1633,7 +1640,7 @@ ssize_t blkdev_aio_write(struct kiocb *i
ret = err;
}

- up_read(&bdev->bd_block_size_semaphore);
+ percpu_up_read(&bdev->bd_block_size_semaphore);

blk_finish_plug(&plug);

@@ -1646,11 +1653,11 @@ int blkdev_mmap(struct file *file, struc
int ret;
struct block_device *bdev = I_BDEV(file->f_mapping->host);

- down_read(&bdev->bd_block_size_semaphore);
+ percpu_down_read(&bdev->bd_block_size_semaphore);

ret = generic_file_mmap(file, vma);

- up_read(&bdev->bd_block_size_semaphore);
+ percpu_up_read(&bdev->bd_block_size_semaphore);

return ret;
}
Index: linux-2.6-copy/include/linux/fs.h
===================================================================
--- linux-2.6-copy.orig/include/linux/fs.h 2012-09-26 00:41:07.000000000 +0200
+++ linux-2.6-copy/include/linux/fs.h 2012-09-26 00:45:29.000000000 +0200
@@ -10,6 +10,7 @@
#include <linux/ioctl.h>
#include <linux/blk_types.h>
#include <linux/types.h>
+#include <linux/percpu-rwsem.h>

/*
* It's silly to have NR_OPEN bigger than NR_FILE, but you can change
@@ -725,7 +726,7 @@ struct block_device {
/* Mutex for freeze */
struct mutex bd_fsfreeze_mutex;
/* A semaphore that prevents I/O while block size is being changed */
- struct rw_semaphore bd_block_size_semaphore;
+ struct percpu_rw_semaphore bd_block_size_semaphore;
};

/*
Index: linux-2.6-copy/include/linux/percpu-rwsem.h
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-copy/include/linux/percpu-rwsem.h 2012-09-26 00:45:29.000000000 +0200
@@ -0,0 +1,89 @@
+#ifndef _LINUX_PERCPU_RWSEM_H
+#define _LINUX_PERCPU_RWSEM_H
+
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+#include <linux/rcupdate.h>
+#include <linux/delay.h>
+
+struct percpu_rw_semaphore {
+ unsigned __percpu *counters;
+ bool locked;
+ struct mutex mtx;
+};
+
+static inline void percpu_down_read(struct percpu_rw_semaphore *p)
+{
+ rcu_read_lock();
+ if (unlikely(p->locked)) {
+ rcu_read_unlock();
+ mutex_lock(&p->mtx);
+ this_cpu_inc(*p->counters);
+ mutex_unlock(&p->mtx);
+ return;
+ }
+ this_cpu_inc(*p->counters);
+ rcu_read_unlock();
+}
+
+static inline void percpu_up_read(struct percpu_rw_semaphore *p)
+{
+ /*
+ * On X86, write operation in this_cpu_dec serves as a memory unlock
+ * barrier (i.e. memory accesses may be moved before the write, but
+ * no memory accesses are moved past the write).
+ * On other architectures this may not be the case, so we need smp_mb()
+ * there.
+ */
+#if defined(CONFIG_X86) && (!defined(CONFIG_X86_PPRO_FENCE) && !defined(CONFIG_X86_OOSTORE))
+ barrier();
+#else
+ smp_mb();
+#endif
+ this_cpu_dec(*p->counters);
+}
+
+static inline unsigned __percpu_count(unsigned __percpu *counters)
+{
+ unsigned total = 0;
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ total += ACCESS_ONCE(*per_cpu_ptr(counters, cpu));
+
+ return total;
+}
+
+static inline void percpu_down_write(struct percpu_rw_semaphore *p)
+{
+ mutex_lock(&p->mtx);
+ p->locked = true;
+ synchronize_rcu();
+ while (__percpu_count(p->counters))
+ msleep(1);
+ smp_rmb(); /* paired with smp_mb() in percpu_up_read() */
+}
+
+static inline void percpu_up_write(struct percpu_rw_semaphore *p)
+{
+ p->locked = false;
+ mutex_unlock(&p->mtx);
+}
+
+static inline int percpu_init_rwsem(struct percpu_rw_semaphore *p)
+{
+ p->counters = alloc_percpu(unsigned);
+ if (unlikely(!p->counters))
+ return -ENOMEM;
+ p->locked = false;
+ mutex_init(&p->mtx);
+ return 0;
+}
+
+static inline void percpu_free_rwsem(struct percpu_rw_semaphore *p)
+{
+ free_percpu(p->counters);
+ p->counters = NULL; /* catch use after free bugs */
+}
+
+#endif
Index: linux-2.6-copy/Documentation/percpu-rw-semaphore.txt
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-copy/Documentation/percpu-rw-semaphore.txt 2012-09-26 00:45:29.000000000 +0200
@@ -0,0 +1,27 @@
+Percpu rw semaphores
+--------------------
+
+Percpu rw semaphores are a new read-write semaphore design that is
+optimized for locking for reading.
+
+The problem with traditional read-write semaphores is that when multiple
+cores take the lock for reading, the cache line containing the semaphore
+is bouncing between L1 caches of the cores, causing performance
+degradation.
+
+Locking for reading is very fast: it uses RCU and avoids any atomic
+instruction in the lock and unlock path. On the other hand, locking for
+writing is very expensive: it calls synchronize_rcu(), which can take
+hundreds of microseconds.
+
+The lock is declared with "struct percpu_rw_semaphore" type.
+The lock is initialized with percpu_init_rwsem; it returns 0 on success and
+-ENOMEM on allocation failure.
+The lock must be freed with percpu_free_rwsem to avoid a memory leak.
+
+The lock is locked for read with percpu_down_read / percpu_up_read, and
+for write with percpu_down_write / percpu_up_write.
+
+The idea of using RCU for an optimized rw-lock was introduced by
+Eric Dumazet <eric.dumazet@gmail.com>.
+The code was written by Mikulas Patocka <mpatocka@redhat.com>.
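
To make the interface concrete, a minimal usage sketch follows (illustrative
only, not part of the patch; "my_dev" and its callers are hypothetical, only
the percpu_* calls come from the patch):

	#include <linux/percpu-rwsem.h>

	struct my_dev {
		struct percpu_rw_semaphore lock;
		unsigned block_size;
	};

	static int my_dev_init(struct my_dev *d)
	{
		d->block_size = 512;
		return percpu_init_rwsem(&d->lock); /* 0 on success, -ENOMEM on failure */
	}

	static void my_dev_submit_io(struct my_dev *d)
	{
		percpu_down_read(&d->lock);	/* cheap: no atomics, no cache line bouncing */
		/* ... issue I/O that relies on d->block_size staying constant ... */
		percpu_up_read(&d->lock);
	}

	static void my_dev_set_block_size(struct my_dev *d, unsigned size)
	{
		percpu_down_write(&d->lock);	/* expensive: calls synchronize_rcu() */
		d->block_size = size;
		percpu_up_write(&d->lock);
	}

	static void my_dev_exit(struct my_dev *d)
	{
		percpu_free_rwsem(&d->lock);	/* must be freed to avoid a memory leak */
	}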

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
09-25-2012, 10:58 PM
Mikulas Patocka

Fix a crash when block device is read and block size is changed at the same time

On Tue, 25 Sep 2012, Jeff Moyer wrote:

> Jeff Moyer <jmoyer@redhat.com> writes:
>
> > Mikulas Patocka <mpatocka@redhat.com> writes:
> >
> >> Hi Jeff
> >>
> >> Thanks for testing.
> >>
> >> It would be interesting ... what happens if you take the patch 3, leave
> >> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
> >> block_device", but remove any use of the semaphore from fs/block_dev.c? -
> >> will the performance be like unpatched kernel or like patch 3? It could be
> >> that the change in the alignment affects performance on your CPU too, just
> >> differently than on my CPU.
> >
> > It turns out to be exactly the same performance as with the 3rd patch
> > applied, so I guess it does have something to do with cache alignment.
> > Here is the patch (against vanilla) I ended up testing. Let me know if
> > I've botched it somehow.
> >
> > So, next up I'll play similar tricks to what you did (padding struct
> > block_device in all kernels) to eliminate the differences due to
> > structure alignment and provide a clear picture of what the locking
> > effects are.
>
> After trying again with the same padding you used in the struct
> bdev_inode, I see no performance differences between any of the
> patches. I tried bumping up the number of threads to saturate the
> number of cpus on a single NUMA node on my hardware, but that resulted
> in lower IOPS to the device, and hence consumption of less CPU time.
> So, I believe my results to be inconclusive.

For me, the fourth patch with RCU-based locks performed better, so I am
submitting that.

> After talking with Vivek about the problem, he had mentioned that it
> might be worth investigating whether bd_block_size could be protected
> using SRCU. I looked into it, and the one thing I couldn't reconcile is
> updating both the bd_block_size and the inode->i_blkbits at the same
> time. It would involve (afaiui) adding fields to both the inode and the
> block_device data structures and using rcu_assign_pointer and
> rcu_dereference to modify and access the fields, and both fields would
> need to be protected by the same struct srcu_struct. I'm not sure whether
> that's a desirable approach. When I started to implement it, it got
> ugly pretty quickly. What do others think?

Using RCU doesn't seem sensible to me (except for the lock implementation,
as it is in patch 4). The major problem is that the block layer reads the
block size multiple times, and when different values are read, a crash may
happen. RCU doesn't protect you against that: if you read a variable
multiple times in an RCU-protected section, you can still get different
results.

If we wanted to use RCU, we would have to read blocksize just once and
pass the value between all functions involved - that would result in a
massive code change.
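
For illustration, a minimal sketch of that approach ("do_io_with_blkbits" is
a hypothetical helper; ACCESS_ONCE marks the single racy read):

	/*
	 * Read i_blkbits exactly once and pass the snapshot down, so all
	 * callees agree on one value even if set_blocksize() races with us.
	 */
	static void submit_one_io(struct inode *inode)
	{
		unsigned int blkbits = ACCESS_ONCE(inode->i_blkbits);

		do_io_with_blkbits(inode, blkbits);
	}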

> For now, my preference is to get the full patch set in. I will continue
> to investigate the performance impact of the data structure size changes
> that I've been seeing.

Yes, we should get the patches to the kernel.

Mikulas

> So, for the four patches:
>
> Acked-by: Jeff Moyer <jmoyer@redhat.com>
>
> Jens, can you have a look at the patch set? We are seeing problem
> reports of this in the wild[1][2].
>
> Cheers,
> Jeff
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=824107
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=812129
>

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
09-26-2012, 05:48 AM
Jens Axboe

Fix a crash when block device is read and block size is changed at the same time

On 2012-09-26 00:49, Mikulas Patocka wrote:
> Here I'm resending it as two patches. The first one uses an existing
> semaphore; the second converts it to an RCU-based percpu semaphore.

Thanks, applied. In the future, please send new patch 'series' as a new
thread instead of replying to some email in the middle of an existing
thread. It all becomes very messy pretty quickly. Patch #2 had a botched
subject line, so that didn't help either :-)

--
Jens Axboe

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
09-26-2012, 02:35 PM
Mikulas Patocka

Fix a crash when block device is read and block size is changed at the same time

On Wed, 26 Sep 2012, Jeff Moyer wrote:

> Mikulas Patocka <mpatocka@redhat.com> writes:
>
> > On Tue, 25 Sep 2012, Jeff Moyer wrote:
> >
> >> Jeff Moyer <jmoyer@redhat.com> writes:
> >>
> >> > Mikulas Patocka <mpatocka@redhat.com> writes:
> >> >
> >> >> Hi Jeff
> >> >>
> >> >> Thanks for testing.
> >> >>
> >> >> It would be interesting ... what happens if you take the patch 3, leave
> >> >> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
> >> >> block_device", but remove any use of the semaphore from fs/block_dev.c? -
> >> >> will the performance be like unpatched kernel or like patch 3? It could be
> >> >> that the change in the alignment affects performance on your CPU too, just
> >> >> differently than on my CPU.
> >> >
> >> > It turns out to be exactly the same performance as with the 3rd patch
> >> > applied, so I guess it does have something to do with cache alignment.
> >> > Here is the patch (against vanilla) I ended up testing. Let me know if
> >> > I've botched it somehow.
> >> >
> >> > So, next up I'll play similar tricks to what you did (padding struct
> >> > block_device in all kernels) to eliminate the differences due to
> >> > structure alignment and provide a clear picture of what the locking
> >> > effects are.
> >>
> >> After trying again with the same padding you used in the struct
> >> bdev_inode, I see no performance differences between any of the
> >> patches. I tried bumping up the number of threads to saturate the
> >> number of cpus on a single NUMA node on my hardware, but that resulted
> >> in lower IOPS to the device, and hence consumption of less CPU time.
> >> So, I believe my results to be inconclusive.
> >
> > For me, the fourth patch with RCU-based locks performed better, so I am
> > submitting that.
> >
> >> After talking with Vivek about the problem, he had mentioned that it
> >> might be worth investigating whether bd_block_size could be protected
> >> using SRCU. I looked into it, and the one thing I couldn't reconcile is
> >> updating both the bd_block_size and the inode->i_blkbits at the same
> >> time. It would involve (afaiui) adding fields to both the inode and the
> >> block_device data structures and using rcu_assign_pointer and
> >> rcu_dereference to modify and access the fields, and both fields would
> >> need to be protected by the same struct srcu_struct. I'm not sure whether
> >> that's a desirable approach. When I started to implement it, it got
> >> ugly pretty quickly. What do others think?
> >
> > Using RCU doesn't seem sensible to me (except for the lock implementation,
> > as it is in patch 4). The major problem is that the block layer reads the
> > block size multiple times, and when different values are read, a crash may
> > happen. RCU doesn't protect you against that: if you read a variable
> > multiple times in an RCU-protected section, you can still get different
> > results.
>
> SRCU is sleepable, so could be (I think) used in the same manner as your
> rw semaphore. The only difference is that it would require changing the
> bd_blocksize and the i_blkbits to pointers and protecting them both with
> the same srcu struct. Then, the inode i_blkbits would also need to be
> special cased, so that we only require such handling when it is
> associated with a block device. It got messy.

No, it couldn't be used this way.

If you do

	srcu_read_lock(&srcu)
	ptr1 = srcu_dereference(pointer, &srcu);
	ptr2 = srcu_dereference(pointer, &srcu);
	srcu_read_unlock(&srcu)

it doesn't guarantee that ptr1 == ptr2.

All that it guarantees is that when synchronize_srcu exits, there are no
references to the old structure. But after rcu_assign_pointer and before
synchronize_srcu exits, readers can read both the old and the new value of
the pointer, and it is not specified which value they read.
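
For illustration, the same point with the full SRCU calling convention
("blocksize_srcu" and "blocksize_ptr" are hypothetical names):

	/*
	 * Even inside one SRCU read-side section, two srcu_dereference()
	 * calls may observe different pointers if a writer publishes a new
	 * one in between.
	 */
	static bool blocksize_reads_may_differ(void)
	{
		int idx = srcu_read_lock(&blocksize_srcu);
		unsigned *p1 = srcu_dereference(blocksize_ptr, &blocksize_srcu);
		/* rcu_assign_pointer(blocksize_ptr, new) may happen here */
		unsigned *p2 = srcu_dereference(blocksize_ptr, &blocksize_srcu);

		srcu_read_unlock(&blocksize_srcu, idx);
		return p1 != p2;	/* can be true: the two reads need not match */
	}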

> > If we wanted to use RCU, we would have to read blocksize just once and
> > pass the value between all functions involved - that would result in a
> > massive code change.
>
> If we did that, we wouldn't need rcu at all, would we?

Yes, we wouldn't need RCU then.

Mikulas

> Cheers,
> Jeff
>

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
