Linux Archive > Redhat > Device-mapper Development
Old 09-22-2010, 11:06 PM
Malahal Naineni
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

Jens, any comments on this patch?

Thanks, Malahal.

Malahal Naineni [malahal@us.ibm.com] wrote:
> The bounce_pfn of the request queue on 64-bit systems is set to the
> current max_low_pfn. Adding more memory later makes this incorrect.
> Memory allocated beyond this boot-time max_low_pfn appears to require
> bounce buffers (bounce buffers are not actually allocated, but they
> are used in calculating segments, which may result in "over max
> segments limit" errors).
>
> Signed-off-by: Malahal Naineni (malahal@us.ibm.com)
>
> diff -r 09daf852c1c5 -r c9516154fabc block/blk-settings.c
> --- a/block/blk-settings.c Thu Sep 09 12:10:43 2010 -0700
> +++ b/block/blk-settings.c Mon Sep 13 10:15:24 2010 -0700
> @@ -214,16 +214,14 @@ void blk_queue_bounce_limit(struct reque
>  	 */
>  	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
>  		dma = 1;
> -	q->limits.bounce_pfn = max_low_pfn;
>  #else
>  	if (b_pfn < blk_max_low_pfn)
>  		dma = 1;
> +#endif
>  	q->limits.bounce_pfn = b_pfn;
> -#endif
>  	if (dma) {
>  		init_emergency_isa_pool();
>  		q->limits.bounce_gfp = GFP_NOIO | GFP_DMA;
> -		q->limits.bounce_pfn = b_pfn;
>  	}
>  }
>  EXPORT_SYMBOL(blk_queue_bounce_limit);
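
For reference, here is roughly how blk_queue_bounce_limit() reads with
the hunk above applied; the surrounding context is paraphrased from the
2.6.36-era block/blk-settings.c rather than quoted verbatim:

void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
{
	unsigned long b_pfn = dma_mask >> PAGE_SHIFT;
	int dma = 0;

	q->limits.bounce_gfp = GFP_NOIO;
#if BITS_PER_LONG == 64
	/* Assume anything <= 4GB can be handled by IOMMU. */
	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
		dma = 1;
#else
	if (b_pfn < blk_max_low_pfn)
		dma = 1;
#endif
	/* record the device's real DMA limit, not boot-time max memory */
	q->limits.bounce_pfn = b_pfn;

	if (dma) {
		init_emergency_isa_pool();
		q->limits.bounce_gfp = GFP_NOIO | GFP_DMA;
	}
}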

 
Old 09-24-2010, 01:58 PM
Jens Axboe
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

On 2010-09-22 00:22, Malahal Naineni wrote:
> The bounce_pfn of the request queue on 64-bit systems is set to the
> current max_low_pfn. Adding more memory later makes this incorrect.

Clearly correct.

> Memory allocated beyond this boot-time max_low_pfn appears to require
> bounce buffers (bounce buffers are not actually allocated, but they
> are used in calculating segments, which may result in "over max
> segments limit" errors).

But I can't quite convince myself that the change is fully correct. You
don't really explain in your own words what the patch does, just what it
fixes.


--
Jens Axboe

 
Old 09-24-2010, 05:05 PM
Malahal Naineni
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

Jens Axboe [jaxboe@fusionio.com] wrote:
> On 2010-09-22 00:22, Malahal Naineni wrote:
> > The bounce_pfn of the request queue on 64-bit systems is set to the
> > current max_low_pfn. Adding more memory later makes this incorrect.
>
> Clearly correct.
>
> > Memory allocated beyond this boot-time max_low_pfn appears to require
> > bounce buffers (bounce buffers are not actually allocated, but they
> > are used in calculating segments, which may result in "over max
> > segments limit" errors).
>
> But I can't quite convince myself that the change is fully correct. You
> don't really explain in your own words what the patch does, just what it
> fixes.

OK, the problem is that we get "over max segments limit" errors from
blk_rq_check_limits() after adding memory. The actual bug is in
blk_phys_contig_segment(), which doesn't check for the possibility of
bounce buffers. This results in more requests being merged in
ll_merge_requests_fn(). Later, the blk_recalc_rq_segments() call from
blk_rq_check_limits() does account for the possibility of bounce
buffers, so the calculated number of segments exceeds the queue's
max_segments, resulting in the above error.
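
To see the asymmetry, compare the two paths (paraphrased from the
2.6.36-era block/blk-merge.c rather than quoted verbatim):

	/* blk_recalc_rq_segments(): a page above bounce_pfn might be
	 * bounced later, so it always starts a new segment. */
	high = page_to_pfn(bv->bv_page) > queue_bounce_pfn(q);
	if (high || highprv)
		goto new_segment;

	/* blk_phys_contig_segment(), used by ll_merge_requests_fn() on
	 * the merge path, performs no such bounce check, so merging can
	 * produce requests whose recalculated segment count later
	 * exceeds q->limits.max_segments. */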

The fix for the actual problem is posted here:
http://permalink.gmane.org/gmane.linux.kernel.device-mapper.devel/12426

So clearly the bug should manifest only when 'bounce buffers' are
involved! Applying the above patch indeed _fixed_ the problem, but I
knew there shouldn't be any need for bounce buffers on our system, and
further investigation revealed that the DMA limit was not being set
correctly.

This patch also _fixed_ our problem, so we are fine with either patch,
but this one is preferred as it enables more request merges. Also,
both patches may be needed for some configurations.

Thanks, Malahal.

 
Old 09-24-2010, 06:28 PM
Jens Axboe
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

On 2010-09-24 19:05, Malahal Naineni wrote:
> Jens Axboe [jaxboe@fusionio.com] wrote:
>> On 2010-09-22 00:22, Malahal Naineni wrote:
>>> The bounce_pfn of the request queue on 64-bit systems is set to the
>>> current max_low_pfn. Adding more memory later makes this incorrect.
>>
>> Clearly correct.
>>
>>> Memory allocated beyond this boot-time max_low_pfn appears to require
>>> bounce buffers (bounce buffers are not actually allocated, but they
>>> are used in calculating segments, which may result in "over max
>>> segments limit" errors).
>>
>> But I can't quite convince myself that the change is fully correct. You
>> don't really explain in your own words what the patch does, just what it
>> fixes.
>
> OK, the problem is that we get "over max segments limit" errors from
> blk_rq_check_limits() after adding memory. The actual bug is in
> blk_phys_contig_segment(), which doesn't check for the possibility of
> bounce buffers. This results in more requests being merged in
> ll_merge_requests_fn(). Later, the blk_recalc_rq_segments() call from
> blk_rq_check_limits() does account for the possibility of bounce
> buffers, so the calculated number of segments exceeds the queue's
> max_segments, resulting in the above error.
>
> The fix for the actual problem is posted here:
> http://permalink.gmane.org/gmane.linux.kernel.device-mapper.devel/12426
>
> So clearly the bug should manifest only when 'bounce buffers' are
> involved! Applying the above patch indeed _fixed_ the problem, but I
> knew there shouldn't be any need for bounce buffers on our system, and
> further investigation revealed that the DMA limit was not being set
> correctly.
>
> This patch also _fixed_ our problem, so we are fine with either patch,
> but this one is preferred as it enables more request merges. Also,
> both patches may be needed for some configurations.

Plus it doesn't needlessly bounce; that's the real problem you want to
fix. I have applied the patch from this thread to for-2.6.37/core, thanks.

--
Jens Axboe

 
Old 09-24-2010, 07:20 PM
Malahal Naineni
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

Jens Axboe [jaxboe@fusionio.com] wrote:
> > This patch also _fixed_ our problem, so we are fine with either patch,
> > but this one is preferred as it enables more request merges. Also,
> > both patches may be needed for some configurations.
>
> Plus it doesn't needlessly bounce; that's the real problem you want to
> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.

There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
return without doing anything. blk_max_pfn is not updated on memory
hot-add, so bounce buffers are NOT actually used in our case
(thankfully)!

Here is the code that may need a fix in the future:

	if (!(q->limits.bounce_gfp & GFP_DMA)) {
		if (queue_bounce_pfn(q) >= blk_max_pfn)
			return;


Thank you so much for applying this patch.

Thanks, Malahal.

 
Old 09-24-2010, 07:26 PM
Jens Axboe
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

On 2010-09-24 21:20, Malahal Naineni wrote:
> Jens Axboe [jaxboe@fusionio.com] wrote:
>>> This patch also _fixed_ our problem, so we are fine with either patch,
>>> but this one is preferred as it enables more request merges. Also,
>>> both patches may be needed for some configurations.
>>
>> Plus it doesn't needlessly bounce; that's the real problem you want to
>> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.
>
> There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
> return without doing anything. blk_max_pfn is not updated on memory
> hot-add, so bounce buffers are NOT actually used in our case
> (thankfully)!

Any reason we can't just add a memory hot-add notifier and update the
block layer's copies when we need to?
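
A rough, untested sketch of what that could look like, assuming the
register_memory_notifier() API and the blk_max_pfn/blk_max_low_pfn
globals that blk_settings_init() sets up (the blk_mem_* names here are
made up), and noting it only helps where the arch updates max_pfn on
hot-add:

#include <linux/memory.h>
#include <linux/mm.h>
#include <linux/blkdev.h>

static int blk_mem_notify(struct notifier_block *nb,
			  unsigned long action, void *arg)
{
	if (action == MEM_ONLINE) {
		/* refresh the boot-time copies the block layer caches */
		blk_max_low_pfn = max_low_pfn - 1;
		blk_max_pfn = max_pfn - 1;
	}
	return NOTIFY_OK;
}

static struct notifier_block blk_mem_nb = {
	.notifier_call = blk_mem_notify,
};

/* e.g. from blk_settings_init(): */
register_memory_notifier(&blk_mem_nb);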

--
Jens Axboe

 
Old 10-01-2010, 02:30 AM
Malahal Naineni
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

Jens Axboe [jaxboe@fusionio.com] wrote:
> On 2010-09-24 21:20, Malahal Naineni wrote:
> > Jens Axboe [jaxboe@fusionio.com] wrote:
> >>> This patch also _fixed_ our problem, so we are fine with either patch,
> >>> but this one is preferred as it enables more request merges. Also,
> >>> both patches may be needed for some configurations.
> >>
> >> Plus it doesn't needlessly bounce; that's the real problem you want to
> >> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.
> >
> > There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
> > return without doing anything. blk_max_pfn is not updated on memory
> > hot-add, so bounce buffers are NOT actually used in our case
> > (thankfully)!
>
> Any reason we can't just add a memory hot-add notifier and update the
> block layer's copies when we need to?

Actually it is quite simple, but the power arch folks decided not to
update max_pfn and max_low_pfn when memory is added or removed, and they
indicate that this is not an oversight! The x86 arch, on the other hand,
does update them.

Another approach would be to define a max_possible_pfn (the maximum
allowable memory), which would be a constant based on the architecture;
hopefully the adapters we care about would still take the shortcut.
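
Roughly the idea, as a hypothetical sketch (MAX_POSSIBLE_PFN does not
exist; deriving it from MAX_PHYSMEM_BITS is just one way an arch could
provide such a constant):

/* The largest PFN this machine could ever have, fixed at boot,
 * so it stays valid across memory hot-add. */
#define MAX_POSSIBLE_PFN	(1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT))

/* The blk_queue_bounce() shortcut would then compare against this
 * constant instead of the stale blk_max_pfn: */
if (queue_bounce_pfn(q) >= MAX_POSSIBLE_PFN)
	return;	/* the device can reach any page that could exist */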

Also, this patch broke some USB hosts. Can you please back out this
patch and apply the one below instead:

http://thread.gmane.org/gmane.linux.kernel.next/14158

Thank you very much,
Malahal.

 
Old 10-01-2010, 12:46 PM
Jens Axboe
 
block: set the bounce_pfn to the actual DMA limit rather than to max memory

On 2010-10-01 04:30, Malahal Naineni wrote:
> Jens Axboe [jaxboe@fusionio.com] wrote:
>> On 2010-09-24 21:20, Malahal Naineni wrote:
>>> Jens Axboe [jaxboe@fusionio.com] wrote:
>>>>> This patch also _fixed_ our problem, so we are fine with either patch,
>>>>> but this one is preferred as it enables more request merges. Also,
>>>>> both patches may be needed for some configurations.
>>>>
>>>> Plus it doesn't needlessly bounce; that's the real problem you want to
>>>> fix. I have applied the patch from this thread to for-2.6.37/core, thanks.
>>>
>>> There is a shortcut check in blk_queue_bounce() that uses blk_max_pfn to
>>> return without doing anything. blk_max_pfn is not updated on memory
>>> hot-add, so bounce buffers are NOT actually used in our case
>>> (thankfully)!
>>
>> Any reason we can't just add a memory hot-add notifier and update the
>> block layer's copies when we need to?
>
> Actually it is quite simple, but the power arch folks decided not to
> update max_pfn and max_low_pfn when memory is added or removed, and they
> indicate that this is not an oversight! The x86 arch, on the other hand,
> does update them.
>
> Another approach would be to define a max_possible_pfn (the maximum
> allowable memory), which would be a constant based on the architecture;
> hopefully the adapters we care about would still take the shortcut.
>
> Also, this patch broke some USB hosts. Can you please back out this
> patch and apply the one below instead:
>
> http://thread.gmane.org/gmane.linux.kernel.next/14158

Done, thanks.

--
Jens Axboe

 
