07-17-2012, 11:25 PM
Jonathan Nieder

Bug#674153: High reported CPU load when idle

tags 674153 = upstream patch moreinfo
quit

Hi again,

Anders Boström wrote:

> Starting with 3.2.17-1, the CPU load accounting is broken when the
> computer is idle. The CPU load is reported as >0.50 when
> idle. 3.2.16-1 doesn't suffer from this problem.
>
> Suspected patch is the upstream patch
> "sched: Fix nohz load accounting -- again!"
> commit 5e2d50da11f0e6ec3ce8fe658d7c83b0b4346c68 to 3.2 and
> originating from c308b56b5398779cd3da0f62ab26b0453494c3d4 .

Please test the attached patch against a 3.2.y kernel, for example by
following the instructions below:

0. prerequisites:

apt-get install git build-essential

1. get the kernel history, if you don't already have it:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

2. fetch point releases:

cd linux
git remote add stable git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git fetch stable

3. configure, build, test:

git checkout stable/linux-3.2.y
cp /boot/config-$(uname -r) .config; # current configuration
scripts/config --disable DEBUG_INFO
make localmodconfig; # optional: minimize configuration
make deb-pkg; # optionally with -j<num> for parallel build
dpkg -i ../<name of package>; # as root
reboot
... test test test ...

Hopefully that will reproduce the bug. If it does:

4. try the patch:

cd linux
git am -3sc /path/to/the/patch
make deb-pkg; # rebuild, optionally with -j<num> for parallel build
dpkg -i ../<name of package>; # as root
reboot
... test test test ...
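
To watch for the symptom itself, a simple loop over /proc/loadavg on an
otherwise idle machine is enough (any load monitor will do):

while sleep 5; do cat /proc/loadavg; done

On an idle machine the first three fields should settle near 0.00; with
this bug present they hover around 0.50 or higher.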

Hope that helps,
Jonathan
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Fri, 22 Jun 2012 15:52:09 +0200
Subject: sched/nohz: Rewrite and fix load-avg computation -- again

commit 5167e8d5417bf5c322a703d2927daec727ea40dd upstream.

Thanks to Charles Wang for spotting the defects in the current code:

- If we go idle during the sample window -- after sampling, we get a
negative bias because we can negate our own sample.

- If we wake up during the sample window we get a positive bias
because we push the sample to a known active period.

So rewrite the entire nohz load-avg muck once again, now adding
copious documentation to the code.

Reported-and-tested-by: Doug Smythies <dsmythies@telus.net>
Reported-and-tested-by: Charles Wang <muming.wq@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@kernel.org
Link: http://lkml.kernel.org/r/1340373782.18025.74.camel@twins
[ minor edits ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
---
include/linux/sched.h | 8 ++
kernel/sched.c | 276 ++++++++++++++++++++++++++++++++++------------
kernel/sched_idletask.c | 1 -
kernel/time/tick-sched.c | 2 +
4 files changed, 213 insertions(+), 74 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1c4f3e9b9bc5..5afa2a345ab1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1892,6 +1892,14 @@ static inline int set_cpus_allowed_ptr(struct task_struct *p,
}
#endif

+#ifdef CONFIG_NO_HZ
+void calc_load_enter_idle(void);
+void calc_load_exit_idle(void);
+#else
+static inline void calc_load_enter_idle(void) { }
+static inline void calc_load_exit_idle(void) { }
+#endif /* CONFIG_NO_HZ */
+
#ifndef CONFIG_CPUMASK_OFFSTACK
static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
{
diff --git a/kernel/sched.c b/kernel/sched.c
index 576a27fa5efc..52ac69b6d4c7 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1885,7 +1885,6 @@ static void double_rq_unlock(struct rq *rq1, struct rq *rq2)

#endif

-static void calc_load_account_idle(struct rq *this_rq);
static void update_sysctl(void);
static int get_update_sysctl_factor(void);
static void update_cpu_load(struct rq *this_rq);
@@ -3401,11 +3400,73 @@ unsigned long this_cpu_load(void)
}


+/*
+ * Global load-average calculations
+ *
+ * We take a distributed and async approach to calculating the global load-avg
+ * in order to minimize overhead.
+ *
+ * The global load average is an exponentially decaying average of nr_running +
+ * nr_uninterruptible.
+ *
+ * Once every LOAD_FREQ:
+ *
+ * nr_active = 0;
+ * for_each_possible_cpu(cpu)
+ * nr_active += cpu_of(cpu)->nr_running + cpu_of(cpu)->nr_uninterruptible;
+ *
+ * avenrun[n] = avenrun[0] * exp_n + nr_active * (1 - exp_n)
+ *
+ * Due to a number of reasons the above turns into the mess below:
+ *
+ * - for_each_possible_cpu() is prohibitively expensive on machines with
+ * serious number of cpus, therefore we need to take a distributed approach
+ * to calculating nr_active.
+ *
+ * \Sum_i x_i(t) = \Sum_i x_i(t) - x_i(t_0) | x_i(t_0) := 0
+ *               = \Sum_i { \Sum_j=1 x_i(t_j) - x_i(t_j-1) }
+ *
+ * So assuming nr_active := 0 when we start out -- true per definition, we
+ * can simply take per-cpu deltas and fold those into a global accumulate
+ * to obtain the same result. See calc_load_fold_active().
+ *
+ * Furthermore, in order to avoid synchronizing all per-cpu delta folding
+ * across the machine, we assume 10 ticks is sufficient time for every
+ * cpu to have completed this task.
+ *
+ * This places an upper-bound on the IRQ-off latency of the machine. Then
+ * again, being late doesn't lose the delta, just wrecks the sample.
+ *
+ * - cpu_rq()->nr_uninterruptible isn't accurately tracked per-cpu because
+ * this would add another cross-cpu cacheline miss and atomic operation
+ * to the wakeup path. Instead we increment on whatever cpu the task ran
+ * when it went into uninterruptible state and decrement on whatever cpu
+ * did the wakeup. This means that only the sum of nr_uninterruptible over
+ * all cpus yields the correct result.
+ *
+ * This covers the NO_HZ=n code, for extra head-aches, see the comment below.
+ */
+
/* Variables and functions for calc_load */
static atomic_long_t calc_load_tasks;
static unsigned long calc_load_update;
unsigned long avenrun[3];
-EXPORT_SYMBOL(avenrun);
+EXPORT_SYMBOL(avenrun); /* should be removed */
+
+/**
+ * get_avenrun - get the load average array
+ * @loads: pointer to dest load array
+ * @offset: offset to add
+ * @shift: shift count to shift the result left
+ *
+ * These values are estimates at best, so no need for locking.
+ */
+void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
+{
+ loads[0] = (avenrun[0] + offset) << shift;
+ loads[1] = (avenrun[1] + offset) << shift;
+ loads[2] = (avenrun[2] + offset) << shift;
+}

static long calc_load_fold_active(struct rq *this_rq)
{
@@ -3422,6 +3483,9 @@ static long calc_load_fold_active(struct rq *this_rq)
return delta;
}

+/*
+ * a1 = a0 * e + a * (1 - e)
+ */
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
@@ -3433,30 +3497,118 @@ calc_load(unsigned long load, unsigned long exp, unsigned long active)

#ifdef CONFIG_NO_HZ
/*
- * For NO_HZ we delay the active fold to the next LOAD_FREQ update.
+ * Handle NO_HZ for the global load-average.
+ *
+ * Since the above described distributed algorithm to compute the global
+ * load-average relies on per-cpu sampling from the tick, it is affected by
+ * NO_HZ.
+ *
+ * The basic idea is to fold the nr_active delta into a global idle-delta upon
+ * entering NO_HZ state such that we can include this as an 'extra' cpu delta
+ * when we read the global state.
+ *
+ * Obviously reality has to ruin such a delightfully simple scheme:
+ *
+ * - When we go NO_HZ idle during the window, we can negate our sample
+ * contribution, causing under-accounting.
+ *
+ * We avoid this by keeping two idle-delta counters and flipping them
+ * when the window starts, thus separating old and new NO_HZ load.
+ *
+ * The only trick is the slight shift in index flip for read vs write.
+ *
+ * 0s 5s 10s 15s
+ * +10 +10 +10 +10
+ * |-|-----------|-|-----------|-|-----------|-|
+ * r:0 0 1 1 0 0 1 1 0
+ * w:0 1 1 0 0 1 1 0 0
+ *
+ * This ensures we'll fold the old idle contribution in this window while
+ * accumulating the new one.
+ *
+ * - When we wake up from NO_HZ idle during the window, we push up our
+ * contribution, since we effectively move our sample point to a known
+ * busy state.
+ *
+ * This is solved by pushing the window forward, and thus skipping the
+ * sample, for this cpu (effectively using the idle-delta for this cpu which
+ * was in effect at the time the window opened). This also solves the issue
+ * of having to deal with a cpu having been in NOHZ idle for multiple
+ * LOAD_FREQ intervals.
*
* When making the ILB scale, we should try to pull this in as well.
*/
-static atomic_long_t calc_load_tasks_idle;
+static atomic_long_t calc_load_idle[2];
+static int calc_load_idx;

-static void calc_load_account_idle(struct rq *this_rq)
+static inline int calc_load_write_idx(void)
{
+ int idx = calc_load_idx;
+
+ /*
+ * See calc_global_nohz(), if we observe the new index, we also
+ * need to observe the new update time.
+ */
+ smp_rmb();
+
+ /*
+ * If the folding window started, make sure we start writing in the
+ * next idle-delta.
+ */
+ if (!time_before(jiffies, calc_load_update))
+ idx++;
+
+ return idx & 1;
+}
+
+static inline int calc_load_read_idx(void)
+{
+ return calc_load_idx & 1;
+}
+
+void calc_load_enter_idle(void)
+{
+ struct rq *this_rq = this_rq();
long delta;

+ /*
+ * We're going into NOHZ mode, if there's any pending delta, fold it
+ * into the pending idle delta.
+ */
delta = calc_load_fold_active(this_rq);
- if (delta)
- atomic_long_add(delta, &calc_load_tasks_idle);
+ if (delta) {
+ int idx = calc_load_write_idx();
+ atomic_long_add(delta, &calc_load_idle[idx]);
+ }
+}
+
+void calc_load_exit_idle(void)
+{
+ struct rq *this_rq = this_rq();
+
+ /*
+ * If we're still before the sample window, we're done.
+ */
+ if (time_before(jiffies, this_rq->calc_load_update))
+ return;
+
+ /*
+ * We woke inside or after the sample window, this means we're already
+ * accounted through the nohz accounting, so skip the entire deal and
+ * sync up for the next window.
+ */
+ this_rq->calc_load_update = calc_load_update;
+ if (time_before(jiffies, this_rq->calc_load_update + 10))
+ this_rq->calc_load_update += LOAD_FREQ;
}

static long calc_load_fold_idle(void)
{
+ int idx = calc_load_read_idx();
long delta = 0;

- /*
- * Its got a race, we don't care...
- */
- if (atomic_long_read(&calc_load_tasks_idle))
- delta = atomic_long_xchg(&calc_load_tasks_idle, 0);
+ if (atomic_long_read(&calc_load_idle[idx]))
+ delta = atomic_long_xchg(&calc_load_idle[idx], 0);

return delta;
}
@@ -3542,66 +3694,39 @@ static void calc_global_nohz(void)
{
long delta, active, n;

- /*
- * If we crossed a calc_load_update boundary, make sure to fold
- * any pending idle changes, the respective CPUs might have
- * missed the tick driven calc_load_account_active() update
- * due to NO_HZ.
- */
- delta = calc_load_fold_idle();
- if (delta)
- atomic_long_add(delta, &calc_load_tasks);
-
- /*
- * It could be the one fold was all it took, we done!
- */
- if (time_before(jiffies, calc_load_update + 10))
- return;
-
- /*
- * Catch-up, fold however many we are behind still
- */
- delta = jiffies - calc_load_update - 10;
- n = 1 + (delta / LOAD_FREQ);
+ if (!time_before(jiffies, calc_load_update + 10)) {
+ /*
+ * Catch-up, fold however many we are behind still
+ */
+ delta = jiffies - calc_load_update - 10;
+ n = 1 + (delta / LOAD_FREQ);

- active = atomic_long_read(&calc_load_tasks);
- active = active > 0 ? active * FIXED_1 : 0;
+ active = atomic_long_read(&calc_load_tasks);
+ active = active > 0 ? active * FIXED_1 : 0;

- avenrun[0] = calc_load_n(avenrun[0], EXP_1, active, n);
- avenrun[1] = calc_load_n(avenrun[1], EXP_5, active, n);
- avenrun[2] = calc_load_n(avenrun[2], EXP_15, active, n);
+ avenrun[0] = calc_load_n(avenrun[0], EXP_1, active, n);
+ avenrun[1] = calc_load_n(avenrun[1], EXP_5, active, n);
+ avenrun[2] = calc_load_n(avenrun[2], EXP_15, active, n);

- calc_load_update += n * LOAD_FREQ;
-}
-#else
-static void calc_load_account_idle(struct rq *this_rq)
-{
-}
+ calc_load_update += n * LOAD_FREQ;
+ }

-static inline long calc_load_fold_idle(void)
-{
- return 0;
+ /*
+ * Flip the idle index...
+ *
+ * Make sure we first write the new time then flip the index, so that
+ * calc_load_write_idx() will see the new time when it reads the new
+ * index, this avoids a double flip messing things up.
+ */
+ smp_wmb();
+ calc_load_idx++;
}
+#else /* !CONFIG_NO_HZ */

-static void calc_global_nohz(void)
-{
-}
-#endif
+static inline long calc_load_fold_idle(void) { return 0; }
+static inline void calc_global_nohz(void) { }

-/**
- * get_avenrun - get the load average array
- * @loads: pointer to dest load array
- * @offset: offset to add
- * @shift: shift count to shift the result left
- *
- * These values are estimates at best, so no need for locking.
- */
-void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
-{
- loads[0] = (avenrun[0] + offset) << shift;
- loads[1] = (avenrun[1] + offset) << shift;
- loads[2] = (avenrun[2] + offset) << shift;
-}
+#endif /* CONFIG_NO_HZ */

/*
* calc_load - update the avenrun load estimates 10 ticks after the
@@ -3609,11 +3734,18 @@ void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
*/
void calc_global_load(unsigned long ticks)
{
- long active;
+ long active, delta;

if (time_before(jiffies, calc_load_update + 10))
return;

+ /*
+ * Fold the 'old' idle-delta to include all NO_HZ cpus.
+ */
+ delta = calc_load_fold_idle();
+ if (delta)
+ atomic_long_add(delta, &calc_load_tasks);
+
active = atomic_long_read(&calc_load_tasks);
active = active > 0 ? active * FIXED_1 : 0;

@@ -3624,12 +3756,7 @@ void calc_global_load(unsigned long ticks)
calc_load_update += LOAD_FREQ;

/*
- * Account one period with whatever state we found before
- * folding in the nohz state and ageing the entire idle period.
- *
- * This avoids loosing a sample when we go idle between
- * calc_load_account_active() (10 ticks ago) and now and thus
- * under-accounting.
+ * In case we idled for multiple LOAD_FREQ intervals, catch up in bulk.
*/
calc_global_nohz();
}
@@ -3646,7 +3773,6 @@ static void calc_load_account_active(struct rq *this_rq)
return;

delta = calc_load_fold_active(this_rq);
- delta += calc_load_fold_idle();
if (delta)
atomic_long_add(delta, &calc_load_tasks);

@@ -3654,6 +3780,10 @@ static void calc_load_account_active(struct rq *this_rq)
}

/*
+ * End of global load-average stuff
+ */
+
+/*
* The exact cpuload at various idx values, calculated at every tick would be
* load = (2^idx - 1) / 2^idx * load + 1 / 2^idx * cur_load
*
diff --git a/kernel/sched_idletask.c b/kernel/sched_idletask.c
index 0a51882534ea..be92bfe39294 100644
--- a/kernel/sched_idletask.c
+++ b/kernel/sched_idletask.c
@@ -23,7 +23,6 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
static struct task_struct *pick_next_task_idle(struct rq *rq)
{
schedstat_inc(rq, sched_goidle);
- calc_load_account_idle(rq);
return rq->idle;
}

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index c9236404aba3..9955ebd7ab7d 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -430,6 +430,7 @@ void tick_nohz_stop_sched_tick(int inidle)
*/
if (!ts->tick_stopped) {
select_nohz_load_balancer(1);
+ calc_load_enter_idle();

ts->idle_tick = hrtimer_get_expires(&ts->sched_timer);
ts->tick_stopped = 1;
@@ -563,6 +564,7 @@ void tick_nohz_restart_sched_tick(void)
account_idle_ticks(ticks);
#endif

+ calc_load_exit_idle();
touch_softlockup_watchdog();
/*
* Cancel the scheduled timer and restore the tick
--
1.7.10.4
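
For readers following the commit message's math: the decay step is the
kernel's usual fixed-point update, a1 = a0 * e + a * (1 - e). Here is a
minimal user-space sketch of it (constants copied from
include/linux/sched.h; the demo main() is illustrative only, not kernel
code), simulating one always-running task:

#include <stdio.h>

/* Fixed-point constants, as in include/linux/sched.h */
#define FSHIFT  11                      /* nr of bits of precision */
#define FIXED_1 (1 << FSHIFT)           /* 1.0 as fixed-point */
#define EXP_1   1884                    /* 1/exp(5sec/1min) as fixed-point */

/* Print helpers mirroring the kernel's LOAD_INT/LOAD_FRAC */
#define LOAD_INT(x)  ((unsigned long)(x) >> FSHIFT)
#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1 - 1)) * 100)

/* Same update as the kernel's calc_load(): a1 = a0*e + a*(1 - e) */
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
        load *= exp;
        load += active * (FIXED_1 - exp);
        return load >> FSHIFT;
}

int main(void)
{
        unsigned long avenrun = 0;          /* 1-minute average */
        unsigned long active = 1 * FIXED_1; /* one running task */
        int i;

        /* One step per LOAD_FREQ (5 s); the average climbs toward 1.00. */
        for (i = 0; i < 24; i++) {
                avenrun = calc_load(avenrun, EXP_1, active);
                printf("%3ds: %lu.%02lu\n", (i + 1) * 5,
                       LOAD_INT(avenrun), LOAD_FRAC(avenrun));
        }
        return 0;
}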
 
07-19-2012, 09:12 AM
Lesław Kopeć

Bug#674153: High reported CPU load when idle

On 07/18/2012 01:25 AM, Jonathan Nieder wrote:
> Anders Boström wrote:
>
>> Starting with 3.2.17-1, the CPU load accounting is broken when the
>> computer is idle. The CPU load is reported as >0.50 when
>> idle. 3.2.16-1 doesn't suffer from this problem.
>>
>> Suspected patch is the upstream patch
>> "sched: Fix nohz load accounting -- again!"
>> commit 5e2d50da11f0e6ec3ce8fe658d7c83b0b4346c68 to 3.2 and
>> originating from c308b56b5398779cd3da0f62ab26b0453494c3d4 .
>
> Please test the attached patch against a 3.2.y kernel, for example by
> following the instructions below:

Good news, everyone. I have tested kernel 3.2.21, and the attached
patch (based on 5167e8d, I presume) seems to fix all the load average
oddities. I've compiled the following kernels:

* 3.2.21-hz (CONFIG_NO_HZ=n)
* 3.2.21-no-hz (CONFIG_NO_HZ=y)
* 3.2.21-no-hz-5167e8d (CONFIG_NO_HZ=y) + attached patch

The load reported by 3.2.21-hz and 3.2.21-no-hz-5167e8d is exactly the
same under different CPU loads. Without the patch, the tickless kernel
tends to show lower load values than you would expect.

I can't say much about the case where load is too high on an idle
machine, because I haven't been able to reproduce that problem in the
first place.

To summarize: the bug is present in the unpatched kernel and fixed by
applying the attached patch. No nasty side effects noticed.
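
For reference, each comparison run amounted to roughly the following (a
sketch, not the exact harness): start a fixed number of busy workers,
log /proc/loadavg on each kernel, and compare the logs.

yes > /dev/null & w1=$!
yes > /dev/null & w2=$!
for i in $(seq 1 120); do # ten minutes, one sample every 5 s
  echo "$(date +%T) $(cut -d' ' -f1-3 /proc/loadavg)"
  sleep 5
done > load-$(uname -r).log
kill $w1 $w2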

--
Lesław Kopeć
 
07-19-2012, 11:29 AM
Ben Hutchings

Bug#674153: High reported CPU load when idle

On Thu, 2012-07-19 at 11:12 +0200, Lesław Kopeć wrote:
[...]
> To summarize: the bug is present in the unpatched kernel and fixed by
> applying the attached patch. No nasty side effects noticed.

This is in the review queue for Linux 3.2.24. I'm hesitant to apply it
until it's been through the stable review process (probably early next
week). But if there's no objection to it there, it will end up in
Debian pretty soon.

Ben.

--
Ben Hutchings
DNRC Motto: I can please only one person per day.
Today is not your day. Tomorrow isn't looking good either.
 
07-23-2012, 03:56 PM
"Doug Smythies"

Bug#674153: High reported CPU load when idle

> On 2012.07.17 16:26 -0700 Jonathan Nieder wrote:
>
> Please test the attached patch against a 3.2.y kernel, for example by
> following the instructions below:
[...]

Hi Jonathan,

Thanks for your instructions. I tried them, because I had never
successfully fetched and compiled the kernel via git before.

Your instructions worked fine.

With only a superficial few hours on each kernel, I verified
the problem with kernel 3.2.23 and no problem with kernel
3.2.23 + patch.

Note: the majority of my testing (one or two hundred hours) was
done with Ubuntu server edition 12.04, kernel 3.2.0-24-generic
#39, and all three of:
556061b00c9f2fd6a5524b6bde823ef12f299ecf
5aaa0b7a2ed5b12692c9ffb5222182bd558d3146
5167e8d5417bf5c322a703d2927daec727ea40dd
back-ported by hand.
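
(On a git checkout of the 3.2 stable tree, the same three fixes could
have been applied with cherry-picks instead, roughly:

git cherry-pick -x 556061b00c9f2fd6a5524b6bde823ef12f299ecf
git cherry-pick -x 5aaa0b7a2ed5b12692c9ffb5222182bd558d3146
git cherry-pick -x 5167e8d5417bf5c322a703d2927daec727ea40dd

assuming the mainline commits have been fetched into that tree.)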

By the way, this non-distro 3.2.23 kernel seems to run a lot faster
than the Ubuntu distro 3.2.0-24 kernel.

... Doug Smythies


 
07-30-2012, 01:53 AM
Ben Hutchings

Bug#674153: High reported CPU load when idle

On Mon, 2012-07-23 at 08:56 -0700, Doug Smythies wrote:
[...]
> With only a superficial few hours on each kernel, I verified
> the problem with kernel 3.2.23 and no problem with kernel
> 3.2.23 + patch.
>
> Note: the majority of my testing (one or two hundred hours) was
> done with Ubuntu server edition 12.04, kernel 3.2.0-24-generic
> #39, and all three of:
> 556061b00c9f2fd6a5524b6bde823ef12f299ecf
> 5aaa0b7a2ed5b12692c9ffb5222182bd558d3146

I have queued these up to be reviewed for inclusion in Linux 3.2.25.

> 5167e8d5417bf5c322a703d2927daec727ea40dd

This was included in Linux 3.2.24.

Ben.

> back-ported by hand.
>
> By the way, this non-distro 3.2.23 kernel seems to run a lot faster
> than the Ubuntu distro 3.2.0-24 kernel.

--
Ben Hutchings
It is impossible to make anything foolproof because fools are so ingenious.
 
