* Nick Piggin wrote:

> Hi, don't know if you really like this patch or not, but it helped me
> out with a problem I recently had....
>
> Basically, when the kernel lock is held, a preempt_count underflow is
> not detected until the lock is released, which may be a long time
> later (and at an arbitrary point, e.g. the task may have been
> rescheduled in the meantime). If the BKL is released inside
> schedule(), the resulting output is fairly cryptic...
>
> With any other lock that elevates preempt_count, it is illegal to
> schedule while holding it (so an underflow would be found pretty
> quickly). The BKL, however, allows scheduling with preempt_count
> elevated, which makes underflows hard to debug.
>
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched.c 2008-09-30 11:32:56.000000000 +1000
> +++ linux-2.6/kernel/sched.c 2008-09-30 11:38:18.000000000 +1000
> @@ -4305,7 +4305,7 @@ void __kprobes sub_preempt_count(int val
> /*
> * Underflow?
> */
> - if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
> + if (DEBUG_LOCKS_WARN_ON(val > preempt_count() - (!!kernel_locked())))
> return;


looks useful to me. This hardcodes a "BKL is tied to preempt-off"
assumption, but that should be OK - when we get rid of the BKL by
turning it into a plain mutex we have to remember to remove this check.

applied the commit below to tip/core/locking, thanks Nick!

Peter, does it look good to you too?

Ingo

--------------->
From 7317d7b87edb41a9135e30be1ec3f7ef817c53dd Mon Sep 17 00:00:00 2001
From: Nick Piggin
Date: Tue, 30 Sep 2008 20:50:27 +1000
Subject: [PATCH] sched: improve preempt debugging

This patch helped me out with a problem I recently had....

Basically, when the kernel lock is held, a preempt_count underflow is not
detected until the lock is released, which may be a long time later (and at an
arbitrary point, e.g. the task may have been rescheduled in the meantime). If
the BKL is released inside schedule(), the resulting output is fairly
cryptic...

With any other lock that elevates preempt_count, it is illegal to schedule
while holding it (so an underflow would be found pretty quickly). The BKL,
however, allows scheduling with preempt_count elevated, which makes underflows
hard to debug.

Signed-off-by: Ingo Molnar
---
kernel/sched.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 9889080..ec3bd1f 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4305,7 +4305,7 @@ void __kprobes sub_preempt_count(int val)
/*
* Underflow?
*/
- if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
+ if (DEBUG_LOCKS_WARN_ON(val > preempt_count() - (!!kernel_locked())))
return;
/*
* Is the spinlock portion underflowing?
--