Use the _lock bitop variants because they are faster and provide a better example to follow.

Signed-off-by: Roel Kluin <12o3l@tiscali.nl>
---
As suggested by Nick Piggin. To be applied after Daniel's likeliness patches
and my previous likeliness-accounting-change-and-cleanup.patch.
The patch was checkpatch.pl-, compile-, sparse- and run-tested (UML).

diff --git a/lib/likely_prof.c b/lib/likely_prof.c
index c9a8d1d..0da3181 100644
--- a/lib/likely_prof.c
+++ b/lib/likely_prof.c
@@ -36,7 +36,7 @@ int do_check_likely(struct likeliness *likeliness, unsigned int ret)
* disable and it was a bit cleaner then using internal __raw
* spinlock calls.
*/
- if (!test_and_set_bit(0, &likely_lock)) {
+ if (!test_and_set_bit_lock(0, &likely_lock)) {
if (likeliness->label & LP_UNSEEN) {
likeliness->label &= (~LP_UNSEEN);
likeliness->next = likeliness_head;
@@ -44,8 +44,7 @@ int do_check_likely(struct likeliness *likeliness, unsigned int ret)
likeliness->caller = (unsigned long)
__builtin_return_address(0);
}
- smp_mb__before_clear_bit();
- clear_bit(0, &likely_lock);
+ clear_bit_unlock(0, &likely_lock);
}
}
