02-06-2011, 10:20 PM
Ken Stailey

KVM: add schedule check to napi_enable call

SRU Justification:

Impact: Under heavy network I/O load the virtio-net driver crashes, making the VM guest unusable.

Testcase: I left a current Lucid VM running two concurrent "scp -r" copies of > 200 GB from a read-only NFS source to a remote physical host overnight. The VM quickly started emitting "page allocation errors" in the system log. The next morning I could still ping the VM but could not establish an SSH connection.

I put the patch into ppa:nutznboltz/lucid-virtio-napi, applied it to the same machine, and the VM did not crash when copying the same data.

$ uname -a
Linux dubnium 2.6.32-28-server #55ubuntu1~ppa3~lucid1-Ubuntu SMP Sun Feb 6 01:03:25 UTC 2011 x86_64 GNU/Linux


diff -u linux-2.6.32/drivers/net/virtio_net.c linux-2.6.32/drivers/net/virtio_net.c
--- linux-2.6.32/drivers/net/virtio_net.c
+++ linux-2.6.32/drivers/net/virtio_net.c
@@ -391,6 +391,20 @@
 	}
 }
 
+static void virtnet_napi_enable(struct virtnet_info *vi)
+{
+	napi_enable(&vi->napi);
+
+	/* If all buffers were filled by other side before we napi_enabled, we
+	 * won't get another interrupt, so process any outstanding packets
+	 * now.  virtnet_poll wants re-enable the queue, so we disable here.
+	 * We synchronize against interrupts via NAPI_STATE_SCHED */
+	if (napi_schedule_prep(&vi->napi)) {
+		vi->rvq->vq_ops->disable_cb(vi->rvq);
+		__napi_schedule(&vi->napi);
+	}
+}
+
 static void refill_work(struct work_struct *work)
 {
 	struct virtnet_info *vi;
@@ -399,7 +413,7 @@
 	vi = container_of(work, struct virtnet_info, refill.work);
 	napi_disable(&vi->napi);
 	still_empty = !try_fill_recv(vi, GFP_KERNEL);
-	napi_enable(&vi->napi);
+	virtnet_napi_enable(vi);
 
 	/* In theory, this can happen: if we don't get any buffers in
 	 * we will *never* try to fill again. */
@@ -591,16 +605,7 @@
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
-	napi_enable(&vi->napi);
-
-	/* If all buffers were filled by other side before we napi_enabled, we
-	 * won't get another interrupt, so process any outstanding packets
-	 * now.  virtnet_poll wants re-enable the queue, so we disable here.
-	 * We synchronize against interrupts via NAPI_STATE_SCHED */
-	if (napi_schedule_prep(&vi->napi)) {
-		vi->rvq->vq_ops->disable_cb(vi->rvq);
-		__napi_schedule(&vi->napi);
-	}
+	virtnet_napi_enable(vi);
 	return 0;
 }
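
For context on what the new helper actually does, independent of the Lucid-specific virtio plumbing, here is a hedged, annotated sketch of the same NAPI "enable then kick" pattern in a generic driver. struct my_priv and my_disable_rx_cb() are placeholders invented for illustration; napi_enable(), napi_schedule_prep() and __napi_schedule() are the real NAPI interfaces the patch relies on.

#include <linux/netdevice.h>	/* napi_enable, napi_schedule_prep, __napi_schedule */

/* Placeholder private data; the patch uses struct virtnet_info. */
struct my_priv {
	struct napi_struct napi;
};

/* Placeholder for masking further RX notifications; the Lucid patch
 * uses vi->rvq->vq_ops->disable_cb(vi->rvq) at this point. */
static void my_disable_rx_cb(struct my_priv *priv)
{
	/* device-specific callback/interrupt masking would go here */
}

static void my_napi_enable(struct my_priv *priv)
{
	napi_enable(&priv->napi);

	/* The device may have filled every RX buffer while NAPI was still
	 * disabled.  In that case no further interrupt will arrive, the
	 * poll routine never runs, and RX stalls.  napi_schedule_prep()
	 * atomically claims NAPI_STATE_SCHED, so this cannot race with a
	 * schedule coming from the interrupt handler. */
	if (napi_schedule_prep(&priv->napi)) {
		my_disable_rx_cb(priv);		/* poll routine re-enables it */
		__napi_schedule(&priv->napi);	/* run the poll loop now */
	}
}

The patch wraps exactly this pattern in virtnet_napi_enable() and calls it from both refill_work() and the device-open path, so the schedule check is no longer skipped when NAPI is re-enabled after a refill.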





--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
02-11-2011, 03:27 PM
Stefan Bader

KVM: add schedule check to napi_enable call

I went ahead and applied the upstream patch (slightly adapted for Lucid) to
Lucid and Maverick's master-next branch.

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
02-11-2011, 05:40 PM
Andy Whitcroft

KVM: add schedule check to napi_enable call

On Fri, Feb 11, 2011 at 05:27:01PM +0100, Stefan Bader wrote:
> I went ahead and applied the upstream patch (slightly adapted for Lucid) to
> Lucid and Maverick's master-next branch.

This is in the next batch of updates coming from Linus and will be in
v2.6.38-rc5, so we'll get it shortly in Natty by default.

-apw

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
