Thread: [PATCH 00/35] cpumask: Replace cpumask_t with struct cpumask

  1. [PATCH 06/35] cpumask: introduce struct cpumask. From: Rusty Russell <rusty@rustcorp.com.au>

    We want to move cpumasks off the stack: no local decls, no passing by
    copy. We also don't want to allow assignment of them, so we can later
    partially allocate them (ie. not all NR_CPUS bits).

    Unfortunately, all the cpus_* functions are written perversely to take
    cpumask_t not cpumask_t *; although they are in fact wrapper macros.
    This sets a bad example. Also, we want to eventually make cpumasks an
    undefined struct, so we can catch on-stack usage with a compile error.

    So we create a 'struct cpumask', typedef cpumask_t to it during the
    transition, and cleanup all the cpumask operators to be normal
    functions (cpus_ -> cpumask_). Note that two functions already use
    variants of the new names: they are fixed in the next patch.
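
    As a minimal sketch (hypothetical caller, not part of the patch), the
    change in calling convention looks like this:

    /* Old style: a cpumask_t local eats NR_CPUS bits of stack, and the
     * cpus_* macros take masks, not pointers. */
    static int count_shared_old(void)
    {
            cpumask_t tmp;
            cpus_and(tmp, cpu_online_map, cpu_present_map);
            return cpus_weight(tmp);
    }

    /* New style: the caller supplies a struct cpumask *, so nothing
     * mask-sized lives on the stack. */
    static int count_shared_new(struct cpumask *tmp)
    {
            cpumask_and(tmp, &cpu_online_map, &cpu_present_map);
            return cpumask_weight(tmp);
    }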

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 356 +++++++++++++++++++++++++-----------------------
    lib/cpumask.c | 2
    2 files changed, 191 insertions(+), 167 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -5,17 +5,20 @@
    * Cpumasks provide a bitmap suitable for representing the
    * set of CPU's in a system, one bit position per CPU number.
    *
    + * Old-style uses "cpumask_t", but new ops are "struct cpumask *";
    + * don't put "struct cpumask"s on the stack.
    + *
    * See detailed comments in the file linux/bitmap.h describing the
    * data type on which these cpumasks are based.
    *
    * For details of cpumask_scnprintf() and cpumask_parse_user(),
    - * see bitmap_scnprintf() and bitmap_parse_user() in lib/bitmap.c.
    - * For details of cpulist_scnprintf() and cpulist_parse(), see
    - * bitmap_scnlistprintf() and bitmap_parselist(), also in bitmap.c.
    - * For details of cpu_remap(), see bitmap_bitremap in lib/bitmap.c
    - * For details of cpus_remap(), see bitmap_remap in lib/bitmap.c.
    - * For details of cpus_onto(), see bitmap_onto in lib/bitmap.c.
    - * For details of cpus_fold(), see bitmap_fold in lib/bitmap.c.
    + * see bitmap_scnprintf() and bitmap_parse_user() in lib/bitmap.c.
    + * For details of cpulist_scnprintf() and cpulist_parse(),
    + * see bitmap_scnlistprintf() and bitmap_parselist(), in lib/bitmap.c.
    + * For details of cpumask_cpuremap(), see bitmap_bitremap in lib/bitmap.c
    + * For details of cpumask_remap(), see bitmap_remap in lib/bitmap.c.
    + * For details of cpumask_onto(), see bitmap_onto in lib/bitmap.c.
    + * For details of cpumask_fold(), see bitmap_fold in lib/bitmap.c.
    *
    * . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    * Note: The alternate operations with the suffix "_nr" are used
    @@ -33,29 +36,29 @@
    *
    * The available cpumask operations are:
    *
    - * void cpu_set(cpu, mask) turn on bit 'cpu' in mask
    - * void cpu_clear(cpu, mask) turn off bit 'cpu' in mask
    - * void cpus_setall(mask) set all bits
    - * void cpus_clear(mask) clear all bits
    - * int cpu_isset(cpu, mask) true iff bit 'cpu' set in mask
    - * int cpu_test_and_set(cpu, mask) test and set bit 'cpu' in mask
    - *
    - * void cpus_and(dst, src1, src2) dst = src1 & src2 [intersection]
    - * void cpus_or(dst, src1, src2) dst = src1 | src2 [union]
    - * void cpus_xor(dst, src1, src2) dst = src1 ^ src2
    - * void cpus_andnot(dst, src1, src2) dst = src1 & ~src2
    - * void cpus_complement(dst, src) dst = ~src
    - *
    - * int cpus_equal(mask1, mask2) Does mask1 == mask2?
    - * int cpus_intersects(mask1, mask2) Do mask1 and mask2 intersect?
    - * int cpus_subset(mask1, mask2) Is mask1 a subset of mask2?
    - * int cpus_empty(mask) Is mask empty (no bits set)?
    - * int cpus_full(mask) Is mask full (all bits set)?
    - * int cpus_weight(mask) Hamming weight - number of set bits
    - * int cpus_weight_nr(mask) Same using nr_cpu_ids instead of NR_CPUS
    + * void cpumask_set_cpu(cpu, mask) turn on bit 'cpu' in mask
    + * void cpumask_clear_cpu(cpu, mask) turn off bit 'cpu' in mask
    + * int cpumask_test_and_set_cpu(cpu, mask) test and set bit 'cpu' in mask
    + * int cpumask_test_cpu(cpu, mask) true iff bit 'cpu' set in mask
    + * void cpumask_setall(mask) set all bits
    + * void cpumask_clear(mask) clear all bits
    + *
    + * void cpumask_and(dst, src1, src2) dst = src1 & src2 [intersection]
    + * void cpumask_or(dst, src1, src2) dst = src1 | src2 [union]
    + * void cpumask_xor(dst, src1, src2) dst = src1 ^ src2
    + * void cpumask_andnot(dst, src1, src2) dst = src1 & ~src2
    + * void cpumask_complement(dst, src) dst = ~src
    + *
    + * int cpumask_equal(mask1, mask2) Does mask1 == mask2?
    + * int cpumask_intersects(mask1, mask2) Do mask1 and mask2 intersect?
    + * int cpumask_subset(mask1, mask2) Is mask1 a subset of mask2?
    + * int cpumask_empty(mask) Is mask empty (no bits set)?
    + * int cpumask_full(mask) Is mask full (all bits set)?
    + * int cpumask_weight(mask) Hamming weight - number of set bits
    + * int cpumask_weight_nr(mask) Same using nr_cpu_ids instead of NR_CPUS
    *
    - * void cpus_shift_right(dst, src, n) Shift right
    - * void cpus_shift_left(dst, src, n) Shift left
    + * void cpumask_shift_right(dst, src, n) Shift right
    + * void cpumask_shift_left(dst, src, n) Shift left
    *
    * int first_cpu(mask) Number lowest set bit, or NR_CPUS
    * int next_cpu(cpu, mask) Next cpu past 'cpu', or NR_CPUS
    @@ -65,7 +68,7 @@
    * (can be used as an lvalue)
    * CPU_MASK_ALL Initializer - all bits set
    * CPU_MASK_NONE Initializer - no bits set
    - * unsigned long *cpus_addr(mask) Array of unsigned long's in mask
    + * unsigned long *cpumask_bits(mask) Array of unsigned long's in mask
    *
    * CPUMASK_ALLOC kmalloc's a structure that is a composite of many cpumask_t
    * variables, and CPUMASK_PTR provides pointers to each field.
    @@ -100,12 +103,12 @@
    *
    * int cpumask_scnprintf(buf, len, mask) Format cpumask for printing
    * int cpumask_parse_user(ubuf, ulen, mask) Parse ascii string as cpumask
    - * int cpulist_scnprintf(buf, len, mask) Format cpumask as list for printing
    - * int cpulist_parse(buf, map) Parse ascii string as cpulist
    + * int cpumask_scnprintf(buf, len, mask) Format cpumask as list for printing
    + * int cpumask_parse(buf, map) Parse ascii string as cpumask
    * int cpu_remap(oldbit, old, new) newbit = map(old, new)(oldbit)
    - * void cpus_remap(dst, src, old, new) *dst = map(old, new)(src)
    - * void cpus_onto(dst, orig, relmap) *dst = orig relative to relmap
    - * void cpus_fold(dst, orig, sz) dst bits = orig bits mod sz
    + * void cpumask_remap(dst, src, old, new) *dst = map(old, new)(src)
    + * void cpumask_onto(dst, orig, relmap) *dst = orig relative to relmap
    + * void cpumask_fold(dst, orig, sz) dst bits = orig bits mod sz
    *
    * for_each_cpu_mask(cpu, mask) for-loop cpu over mask using NR_CPUS
    * for_each_cpu_mask_nr(cpu, mask) for-loop cpu over mask using nr_cpu_ids
    @@ -139,131 +142,216 @@
    #include <linux/threads.h>
    #include <linux/bitmap.h>

    -typedef struct { DECLARE_BITMAP(bits, NR_CPUS); } cpumask_t;
    +struct cpumask {
    + DECLARE_BITMAP(bits, NR_CPUS);
    +};
    +#define cpumask_bits(maskp) ((maskp)->bits)
    +
    +/* Deprecated. */
    +typedef struct cpumask cpumask_t;
    extern cpumask_t _unused_cpumask_arg_;

    -#define cpu_set(cpu, dst) __cpu_set((cpu), &(dst))
    -static inline void __cpu_set(int cpu, volatile cpumask_t *dstp)
    +#define cpu_set(cpu, dst) cpumask_set_cpu((cpu), &(dst))
    +#define cpu_clear(cpu, dst) cpumask_clear_cpu((cpu), &(dst))
    +#define cpu_test_and_set(cpu, mask) cpumask_test_and_set_cpu((cpu), &(mask))
    +/* No static inline type checking - see Subtlety (1) above. */
    +#define cpu_isset(cpu, cpumask) test_bit((cpu), (cpumask).bits)
    +#define cpus_setall(dst) cpumask_setall(&(dst))
    +#define cpus_clear(dst) cpumask_clear(&(dst))
    +#define cpus_and(dst, src1, src2) cpumask_and(&(dst), &(src1), &(src2))
    +#define cpus_or(dst, src1, src2) cpumask_or(&(dst), &(src1), &(src2))
    +#define cpus_xor(dst, src1, src2) cpumask_xor(&(dst), &(src1), &(src2))
    +#define cpus_andnot(dst, src1, src2) \
    + cpumask_andnot(&(dst), &(src1), &(src2))
    +#define cpus_complement(dst, src) cpumask_complement(&(dst), &(src))
    +#define cpus_equal(src1, src2) cpumask_equal(&(src1), &(src2))
    +#define cpus_intersects(src1, src2) cpumask_intersects(&(src1), &(src2))
    +#define cpus_subset(src1, src2) cpumask_subset(&(src1), &(src2))
    +#define cpus_empty(src) cpumask_empty(&(src))
    +#define cpus_full(cpumask) cpumask_full(&(cpumask))
    +#define cpus_weight(cpumask) cpumask_weight(&(cpumask))
    +#define cpus_shift_right(dst, src, n) \
    + cpumask_shift_right(&(dst), &(src), (n))
    +#define cpus_shift_left(dst, src, n) \
    + cpumask_shift_left(&(dst), &(src), (n))
    +#define cpumask_scnprintf(buf, len, src) \
    + __cpumask_scnprintf((buf), (len), &(src))
    +#define cpumask_parse_user(ubuf, ulen, dst) \
    + __cpumask_parse_user((ubuf), (ulen), &(dst))
    +#define cpulist_scnprintf(buf, len, src) \
    + __cpulist_scnprintf((buf), (len), &(src))
    +#define cpulist_parse(buf, dst) __cpulist_parse((buf), &(dst))
    +#define cpu_remap(oldbit, old, new) \
    + cpumask_cpuremap((oldbit), &(old), &(new))
    +#define cpus_remap(dst, src, old, new) \
    + cpumask_remap(&(dst), &(src), &(old), &(new))
    +#define cpus_onto(dst, orig, relmap) \
    + cpumask_onto(&(dst), &(orig), &(relmap))
    +#define cpus_fold(dst, orig, sz) \
    + cpumask_fold(&(dst), &(orig), sz)
    +#define cpus_addr(src) ((src).bits)
    +/* End deprecated region. */
    +
    +static inline void cpumask_set_cpu(int cpu, volatile struct cpumask *dstp)
    {
    set_bit(cpu, dstp->bits);
    }

    -#define cpu_clear(cpu, dst) __cpu_clear((cpu), &(dst))
    -static inline void __cpu_clear(int cpu, volatile cpumask_t *dstp)
    +static inline void cpumask_clear_cpu(int cpu, volatile struct cpumask *dstp)
    {
    clear_bit(cpu, dstp->bits);
    }

    -#define cpus_setall(dst) __cpus_setall(&(dst), NR_CPUS)
    -static inline void __cpus_setall(cpumask_t *dstp, int nbits)
    +/* No static inline type checking - see Subtlety (1) above. */
    +#define cpumask_test_cpu(cpu, cpumask) test_bit((cpu), (cpumask)->bits)
    +
    +static inline int cpumask_test_and_set_cpu(int cpu, struct cpumask *addr)
    {
    - bitmap_fill(dstp->bits, nbits);
    + return test_and_set_bit(cpu, addr->bits);
    }

    -#define cpus_clear(dst) __cpus_clear(&(dst), NR_CPUS)
    -static inline void __cpus_clear(cpumask_t *dstp, int nbits)
    +static inline void cpumask_setall(struct cpumask *dstp)
    {
    - bitmap_zero(dstp->bits, nbits);
    + bitmap_fill(dstp->bits, NR_CPUS);
    }

    -/* No static inline type checking - see Subtlety (1) above. */
    -#define cpu_isset(cpu, cpumask) test_bit((cpu), (cpumask).bits)
    -
    -#define cpu_test_and_set(cpu, cpumask) __cpu_test_and_set((cpu), &(cpumask))
    -static inline int __cpu_test_and_set(int cpu, cpumask_t *addr)
    +static inline void cpumask_clear(struct cpumask *dstp)
    {
    - return test_and_set_bit(cpu, addr->bits);
    + bitmap_zero(dstp->bits, NR_CPUS);
    }

    -#define cpus_and(dst, src1, src2) __cpus_and(&(dst), &(src1), &(src2), NR_CPUS)
    -static inline void __cpus_and(cpumask_t *dstp, const cpumask_t *src1p,
    - const cpumask_t *src2p, int nbits)
    +static inline void cpumask_and(struct cpumask *dstp,
    + const struct cpumask *src1p,
    + const struct cpumask *src2p)
    {
    - bitmap_and(dstp->bits, src1p->bits, src2p->bits, nbits);
    + bitmap_and(dstp->bits, src1p->bits, src2p->bits, NR_CPUS);
    }

    -#define cpus_or(dst, src1, src2) __cpus_or(&(dst), &(src1), &(src2), NR_CPUS)
    -static inline void __cpus_or(cpumask_t *dstp, const cpumask_t *src1p,
    - const cpumask_t *src2p, int nbits)
    +static inline void cpumask_or(struct cpumask *dstp, const struct cpumask *src1p,
    + const struct cpumask *src2p)
    {
    - bitmap_or(dstp->bits, src1p->bits, src2p->bits, nbits);
    + bitmap_or(dstp->bits, src1p->bits, src2p->bits, NR_CPUS);
    }

    -#define cpus_xor(dst, src1, src2) __cpus_xor(&(dst), &(src1), &(src2), NR_CPUS)
    -static inline void __cpus_xor(cpumask_t *dstp, const cpumask_t *src1p,
    - const cpumask_t *src2p, int nbits)
    +static inline void cpumask_xor(struct cpumask *dstp,
    + const struct cpumask *src1p,
    + const struct cpumask *src2p)
    {
    - bitmap_xor(dstp->bits, src1p->bits, src2p->bits, nbits);
    + bitmap_xor(dstp->bits, src1p->bits, src2p->bits, NR_CPUS);
    }

    -#define cpus_andnot(dst, src1, src2) \
    - __cpus_andnot(&(dst), &(src1), &(src2), NR_CPUS)
    -static inline void __cpus_andnot(cpumask_t *dstp, const cpumask_t *src1p,
    - const cpumask_t *src2p, int nbits)
    +static inline void cpumask_andnot(struct cpumask *dstp,
    + const struct cpumask *src1p,
    + const struct cpumask *src2p)
    {
    - bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits);
    + bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, NR_CPUS);
    }

    -#define cpus_complement(dst, src) __cpus_complement(&(dst), &(src), NR_CPUS)
    -static inline void __cpus_complement(cpumask_t *dstp,
    - const cpumask_t *srcp, int nbits)
    +static inline void cpumask_complement(struct cpumask *dstp,
    + const struct cpumask *srcp)
    {
    - bitmap_complement(dstp->bits, srcp->bits, nbits);
    + bitmap_complement(dstp->bits, srcp->bits, NR_CPUS);
    }

    -#define cpus_equal(src1, src2) __cpus_equal(&(src1), &(src2), NR_CPUS)
    -static inline int __cpus_equal(const cpumask_t *src1p,
    - const cpumask_t *src2p, int nbits)
    +static inline int cpumask_equal(const struct cpumask *src1p,
    + const struct cpumask *src2p)
    {
    - return bitmap_equal(src1p->bits, src2p->bits, nbits);
    + return bitmap_equal(src1p->bits, src2p->bits, NR_CPUS);
    }

    -#define cpus_intersects(src1, src2) __cpus_intersects(&(src1), &(src2), NR_CPUS)
    -static inline int __cpus_intersects(const cpumask_t *src1p,
    - const cpumask_t *src2p, int nbits)
    +static inline int cpumask_intersects(const struct cpumask *src1p,
    + const struct cpumask *src2p)
    {
    - return bitmap_intersects(src1p->bits, src2p->bits, nbits);
    + return bitmap_intersects(src1p->bits, src2p->bits, NR_CPUS);
    }

    -#define cpus_subset(src1, src2) __cpus_subset(&(src1), &(src2), NR_CPUS)
    -static inline int __cpus_subset(const cpumask_t *src1p,
    - const cpumask_t *src2p, int nbits)
    +static inline int cpumask_subset(const struct cpumask *src1p,
    + const struct cpumask *src2p)
    {
    - return bitmap_subset(src1p->bits, src2p->bits, nbits);
    + return bitmap_subset(src1p->bits, src2p->bits, NR_CPUS);
    }

    -#define cpus_empty(src) __cpus_empty(&(src), NR_CPUS)
    -static inline int __cpus_empty(const cpumask_t *srcp, int nbits)
    +static inline int cpumask_empty(const struct cpumask *srcp)
    {
    - return bitmap_empty(srcp->bits, nbits);
    + return bitmap_empty(srcp->bits, NR_CPUS);
    }

    -#define cpus_full(cpumask) __cpus_full(&(cpumask), NR_CPUS)
    -static inline int __cpus_full(const cpumask_t *srcp, int nbits)
    +static inline int cpumask_full(const struct cpumask *srcp)
    {
    - return bitmap_full(srcp->bits, nbits);
    + return bitmap_full(srcp->bits, NR_CPUS);
    }

    -#define cpus_weight(cpumask) __cpus_weight(&(cpumask), NR_CPUS)
    static inline int __cpus_weight(const cpumask_t *srcp, int nbits)
    {
    return bitmap_weight(srcp->bits, nbits);
    }

    -#define cpus_shift_right(dst, src, n) \
    - __cpus_shift_right(&(dst), &(src), (n), NR_CPUS)
    -static inline void __cpus_shift_right(cpumask_t *dstp,
    - const cpumask_t *srcp, int n, int nbits)
    +static inline int cpumask_weight(const struct cpumask *srcp)
    {
    - bitmap_shift_right(dstp->bits, srcp->bits, n, nbits);
    + return bitmap_weight(srcp->bits, NR_CPUS);
    }

    -#define cpus_shift_left(dst, src, n) \
    - __cpus_shift_left(&(dst), &(src), (n), NR_CPUS)
    -static inline void __cpus_shift_left(cpumask_t *dstp,
    - const cpumask_t *srcp, int n, int nbits)
    +static inline void cpumask_shift_right(struct cpumask *dstp,
    + const struct cpumask *srcp, int n)
    +{
    + bitmap_shift_right(dstp->bits, srcp->bits, n, NR_CPUS);
    +}
    +
    +static inline void cpumask_shift_left(struct cpumask *dstp,
    + const struct cpumask *srcp, int n)
    +{
    + bitmap_shift_left(dstp->bits, srcp->bits, n, NR_CPUS);
    +}
    +
    +static inline int __cpumask_scnprintf(char *buf, int len,
    + const struct cpumask *srcp)
    +{
    + return bitmap_scnprintf(buf, len, srcp->bits, NR_CPUS);
    +}
    +
    +static inline int __cpumask_parse_user(const char __user *buf, int len,
    + struct cpumask *dstp)
    +{
    + return bitmap_parse_user(buf, len, dstp->bits, NR_CPUS);
    +}
    +
    +static inline int __cpulist_scnprintf(char *buf, int len,
    + const struct cpumask *srcp)
    +{
    + return bitmap_scnlistprintf(buf, len, srcp->bits, NR_CPUS);
    +}
    +
    +static inline int __cpulist_parse(const char *buf, struct cpumask *dstp)
    +{
    + return bitmap_parselist(buf, dstp->bits, NR_CPUS);
    +}
    +
    +static inline int cpumask_cpuremap(int oldbit,
    + const struct cpumask *oldp,
    + const struct cpumask *newp)
    +{
    + return bitmap_bitremap(oldbit, oldp->bits, newp->bits, NR_CPUS);
    +}
    +
    +static inline void cpumask_remap(struct cpumask *dstp,
    + const struct cpumask *srcp,
    + const struct cpumask *oldp,
    + const struct cpumask *newp)
    +{
    + bitmap_remap(dstp->bits, srcp->bits, oldp->bits, newp->bits, NR_CPUS);
    +}
    +
    +static inline void cpumask_onto(struct cpumask *dstp,
    + const struct cpumask *origp,
    + const struct cpumask *relmapp)
    +{
    + bitmap_onto(dstp->bits, origp->bits, relmapp->bits, NR_CPUS);
    +}
    +
    +static inline void cpumask_fold(struct cpumask *dstp,
    + const struct cpumask *origp, int sz)
    {
    - bitmap_shift_left(dstp->bits, srcp->bits, n, nbits);
    + bitmap_fold(dstp->bits, origp->bits, sz, NR_CPUS);
    }

    /*
    @@ -326,8 +414,6 @@ extern cpumask_t cpu_mask_all;
    [0] = 1UL \
    } }

    -#define cpus_addr(src) ((src).bits)
    -
    #if NR_CPUS > BITS_PER_LONG
    #define CPUMASK_ALLOC(m) struct m *m = kmalloc(sizeof(*m), GFP_KERNEL)
    #define CPUMASK_FREE(m) kfree(m)
    @@ -337,68 +423,6 @@ extern cpumask_t cpu_mask_all;
    #endif
    #define CPUMASK_PTR(v, m) cpumask_t *v = &(m->v)

    -#define cpumask_scnprintf(buf, len, src) \
    - __cpumask_scnprintf((buf), (len), &(src), NR_CPUS)
    -static inline int __cpumask_scnprintf(char *buf, int len,
    - const cpumask_t *srcp, int nbits)
    -{
    - return bitmap_scnprintf(buf, len, srcp->bits, nbits);
    -}
    -
    -#define cpumask_parse_user(ubuf, ulen, dst) \
    - __cpumask_parse_user((ubuf), (ulen), &(dst), NR_CPUS)
    -static inline int __cpumask_parse_user(const char __user *buf, int len,
    - cpumask_t *dstp, int nbits)
    -{
    - return bitmap_parse_user(buf, len, dstp->bits, nbits);
    -}
    -
    -#define cpulist_scnprintf(buf, len, src) \
    - __cpulist_scnprintf((buf), (len), &(src), NR_CPUS)
    -static inline int __cpulist_scnprintf(char *buf, int len,
    - const cpumask_t *srcp, int nbits)
    -{
    - return bitmap_scnlistprintf(buf, len, srcp->bits, nbits);
    -}
    -
    -#define cpulist_parse(buf, dst) __cpulist_parse((buf), &(dst), NR_CPUS)
    -static inline int __cpulist_parse(const char *buf, cpumask_t *dstp, int nbits)
    -{
    - return bitmap_parselist(buf, dstp->bits, nbits);
    -}
    -
    -#define cpu_remap(oldbit, old, new) \
    - __cpu_remap((oldbit), &(old), &(new), NR_CPUS)
    -static inline int __cpu_remap(int oldbit,
    - const cpumask_t *oldp, const cpumask_t *newp, int nbits)
    -{
    - return bitmap_bitremap(oldbit, oldp->bits, newp->bits, nbits);
    -}
    -
    -#define cpus_remap(dst, src, old, new) \
    - __cpus_remap(&(dst), &(src), &(old), &(new), NR_CPUS)
    -static inline void __cpus_remap(cpumask_t *dstp, const cpumask_t *srcp,
    - const cpumask_t *oldp, const cpumask_t *newp, int nbits)
    -{
    - bitmap_remap(dstp->bits, srcp->bits, oldp->bits, newp->bits, nbits);
    -}
    -
    -#define cpus_onto(dst, orig, relmap) \
    - __cpus_onto(&(dst), &(orig), &(relmap), NR_CPUS)
    -static inline void __cpus_onto(cpumask_t *dstp, const cpumask_t *origp,
    - const cpumask_t *relmapp, int nbits)
    -{
    - bitmap_onto(dstp->bits, origp->bits, relmapp->bits, nbits);
    -}
    -
    -#define cpus_fold(dst, orig, sz) \
    - __cpus_fold(&(dst), &(orig), sz, NR_CPUS)
    -static inline void __cpus_fold(cpumask_t *dstp, const cpumask_t *origp,
    - int sz, int nbits)
    -{
    - bitmap_fold(dstp->bits, origp->bits, sz, nbits);
    -}
    -
    #if NR_CPUS == 1

    #define nr_cpu_ids 1
    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -18,7 +18,7 @@ EXPORT_SYMBOL(__next_cpu);
    int cpumask_next_and(int n, const cpumask_t *srcp, const cpumask_t *andp)
    {
    while ((n = next_cpu_nr(n, *srcp)) < nr_cpu_ids)
    - if (cpu_isset(n, *andp))
    + if (cpumask_test_cpu(n, andp))
    break;
    return n;
    }


  2. [PATCH 31/35] cpumask: reorder header to minimize separate #ifdefs From: Rusty Russell <rusty@rustcorp.com.au>

    cpumask.h is pretty chaotic. Now we've replaced most of it, let's
    group things together a bit better.

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 166 +++++++++++++++++++++---------------------------
    1 file changed, 76 insertions(+), 90 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -110,10 +110,82 @@ struct cpumask {
    };
    #define cpumask_bits(maskp) ((maskp)->bits)

    -#define cpumask_size() (BITS_TO_LONGS(nr_cpumask_bits) * sizeof(long))
    +/* Deprecated: use struct cpumask *, or cpumask_var_t. */
    +typedef struct cpumask cpumask_t;
    +
    +#if CONFIG_NR_CPUS == 1
    +/* Uniprocessor. */
    +#define cpumask_first(src) ({ (void)(src); 0; })
    +#define cpumask_next(n, src) ({ (void)(src); 1; })
    +#define cpumask_next_and(n, srcp, andp) ({ (void)(srcp), (void)(andp); 1; })
    +#define cpumask_any_but(mask, cpu) ({ (void)(mask); (void)(cpu); 0; })
    +
    +#define for_each_cpu(cpu, mask) \
    + for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
    +#define for_each_cpu_and(cpu, mask, and) \
    + for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)and)
    +
    +#define num_online_cpus() 1
    +#define num_possible_cpus() 1
    +#define num_present_cpus() 1
    +#define cpu_online(cpu) ((cpu) == 0)
    +#define cpu_possible(cpu) ((cpu) == 0)
    +#define cpu_present(cpu) ((cpu) == 0)
    +#define cpu_active(cpu) ((cpu) == 0)
    +#define nr_cpu_ids 1
    +#else
    +/* SMP */
    +extern int nr_cpu_ids;
    +
    +int cpumask_first(const cpumask_t *srcp);
    +int cpumask_next(int n, const cpumask_t *srcp);
    +int cpumask_next_and(int n, const cpumask_t *srcp, const cpumask_t *andp);
    +int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
    +
    +#define for_each_cpu(cpu, mask) \
    + for ((cpu) = -1; \
    + (cpu) = cpumask_next((cpu), (mask)), \
    + (cpu) < nr_cpu_ids;)
    +#define for_each_cpu_and(cpu, mask, and) \
    + for ((cpu) = -1; \
    + (cpu) = cpumask_next_and((cpu), (mask), (and)), \
    + (cpu) < nr_cpu_ids;)
    +
    +#define num_online_cpus() cpus_weight(cpu_online_map)
    +#define num_possible_cpus() cpus_weight(cpu_possible_map)
    +#define num_present_cpus() cpus_weight(cpu_present_map)
    +#define cpu_online(cpu) cpu_isset((cpu), cpu_online_map)
    +#define cpu_possible(cpu) cpu_isset((cpu), cpu_possible_map)
    +#define cpu_present(cpu) cpu_isset((cpu), cpu_present_map)
    +#define cpu_active(cpu) cpu_isset((cpu), cpu_active_map)
    +#endif /* SMP */
    +
    +#if CONFIG_NR_CPUS <= BITS_PER_LONG
    +#define CPU_BITS_ALL \
    +{ \
    + [BITS_TO_LONGS(CONFIG_NR_CPUS)-1] = CPU_MASK_LAST_WORD \
    +}
    +
    +/* This produces more efficient code. */
    +#define nr_cpumask_bits NR_CPUS
    +
    +#else /* CONFIG_NR_CPUS > BITS_PER_LONG */
    +
    +#define CPU_BITS_ALL \
    +{ \
    + [0 ... BITS_TO_LONGS(CONFIG_NR_CPUS)-2] = ~0UL, \
    + [BITS_TO_LONGS(CONFIG_NR_CPUS)-1] = CPU_MASK_LAST_WORD \
    +}
    +
    +#define nr_cpumask_bits nr_cpu_ids
    +#endif /* CONFIG_NR_CPUS > BITS_PER_LONG */
    +
    +static inline size_t cpumask_size(void)
    +{
    + return BITS_TO_LONGS(nr_cpumask_bits) * sizeof(long);
    +}

    /* Deprecated. */
    -typedef struct cpumask cpumask_t;
    extern cpumask_t _unused_cpumask_arg_;

    #define CPU_MASK_ALL_PTR (cpu_all_mask)
    @@ -178,21 +250,7 @@ extern cpumask_t _unused_cpumask_arg_;
    #define cpu_mask_all (*(cpumask_t *)cpu_all_mask)
    /* End deprecated region. */

    -#if NR_CPUS > 1
    -/* Starts at NR_CPUS until we know better. */
    -extern int nr_cpu_ids;
    -#else
    -#define nr_cpu_ids NR_CPUS
    -#endif
    -
    -/* The number of bits to hand to the bitmask ops. */
    -#if NR_CPUS <= BITS_PER_LONG
    -/* This produces more efficient code. */
    -#define nr_cpumask_bits NR_CPUS
    -#else
    -#define nr_cpumask_bits nr_cpu_ids
    -#endif
    -
    +/* cpumask_* operators */
    static inline void cpumask_set_cpu(int cpu, volatile struct cpumask *dstp)
    {
    set_bit(cpu, cpumask_bits(dstp));
    @@ -407,24 +465,7 @@ static inline const struct cpumask *cpum
    return (const struct cpumask *)p;
    }

    -#define CPU_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(NR_CPUS)
    -
    -#if NR_CPUS <= BITS_PER_LONG
    -
    -#define CPU_BITS_ALL \
    -{ \
    - [BITS_TO_LONGS(CONFIG_NR_CPUS)-1] = CPU_MASK_LAST_WORD \
    -}
    -
    -#else
    -
    -#define CPU_BITS_ALL \
    -{ \
    - [0 ... BITS_TO_LONGS(CONFIG_NR_CPUS)-2] = ~0UL, \
    - [BITS_TO_LONGS(CONFIG_NR_CPUS)-1] = CPU_MASK_LAST_WORD \
    -}
    -
    -#endif
    +#define CPU_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(CONFIG_NR_CPUS)

    #define CPU_BITS_NONE \
    { \
    @@ -436,43 +477,6 @@ static inline const struct cpumask *cpum
    [0] = 1UL \
    }

    -#if NR_CPUS == 1
    -
    -#define cpumask_first(src) ({ (void)(src); 0; })
    -#define cpumask_next(n, src) ({ (void)(src); 1; })
    -#define cpumask_next_and(n, srcp, andp) ({ (void)(srcp), (void)(andp); 1; })
    -#define cpumask_any_but(mask, cpu) ({ (void)(mask); (void)(cpu); 0; })
    -
    -#define for_each_cpu(cpu, mask) \
    - for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
    -#define for_each_cpu_and(cpu, mask, and) \
    - for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)and)
    -
    -#else /* NR_CPUS > 1 */
    -
    -int cpumask_first(const cpumask_t *srcp);
    -int cpumask_next(int n, const cpumask_t *srcp);
    -int cpumask_next_and(int n, const cpumask_t *srcp, const cpumask_t *andp);
    -int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
    -
    -#define for_each_cpu(cpu, mask) \
    - for ((cpu) = -1; \
    - (cpu) = cpumask_next((cpu), (mask)), \
    - (cpu) < nr_cpu_ids;)
    -#define for_each_cpu_and(cpu, mask, and) \
    - for ((cpu) = -1; \
    - (cpu) = cpumask_next_and((cpu), (mask), (and)), \
    - (cpu) < nr_cpu_ids;)
    -
    -#define num_online_cpus() cpus_weight(cpu_online_map)
    -#define num_possible_cpus() cpus_weight(cpu_possible_map)
    -#define num_present_cpus() cpus_weight(cpu_present_map)
    -#define cpu_online(cpu) cpu_isset((cpu), cpu_online_map)
    -#define cpu_possible(cpu) cpu_isset((cpu), cpu_possible_map)
    -#define cpu_present(cpu) cpu_isset((cpu), cpu_present_map)
    -#define cpu_active(cpu) cpu_isset((cpu), cpu_active_map)
    -#endif /* NR_CPUS */
    -
    #define cpumask_first_and(mask, and) cpumask_next_and(-1, (mask), (and))

    /*
    @@ -564,24 +568,6 @@ extern const DECLARE_BITMAP(cpu_all_bits
    /* First bits of cpu_bit_bitmap are in fact unset. */
    #define cpu_none_mask to_cpumask(cpu_bit_bitmap[0])

    -#if NR_CPUS > 1
    -#define num_online_cpus() cpus_weight(cpu_online_map)
    -#define num_possible_cpus() cpus_weight(cpu_possible_map)
    -#define num_present_cpus() cpus_weight(cpu_present_map)
    -#define cpu_online(cpu) cpu_isset((cpu), cpu_online_map)
    -#define cpu_possible(cpu) cpu_isset((cpu), cpu_possible_map)
    -#define cpu_present(cpu) cpu_isset((cpu), cpu_present_map)
    -#define cpu_active(cpu) cpu_isset((cpu), cpu_active_map)
    -#else
    -#define num_online_cpus() 1
    -#define num_possible_cpus() 1
    -#define num_present_cpus() 1
    -#define cpu_online(cpu) ((cpu) == 0)
    -#define cpu_possible(cpu) ((cpu) == 0)
    -#define cpu_present(cpu) ((cpu) == 0)
    -#define cpu_active(cpu) ((cpu) == 0)
    -#endif
    -
    /* Wrappers to manipulate otherwise-constant masks. */
    void set_cpu_possible(unsigned int cpu, bool possible);
    void set_cpu_present(unsigned int cpu, bool present);


  3. [PATCH 10/35] cpumask: introduce cpumask_var_t for local cpumask vars From: Rusty Russell <rusty@rustcorp.com.au>

    We want to move cpumasks off the stack: no local decls, no passing by
    copy. Linus suggested we use the array-or-pointer trick for on-stack
    vars; we introduce a new cpumask_var_t for this.

    Rather than pick an arbitrary limit, I chose a new config option so
    arch maintainers can decide where their threshold is.
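
    As a sketch of why one caller compiles against both definitions
    (hypothetical function, not from the patch):

    static int do_work(void)
    {
            cpumask_var_t tmpmask;  /* array of one, or a plain pointer */

            if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
                    return -ENOMEM;
            /*
             * With CONFIG_CPUMASK_OFFSTACK=n, cpumask_var_t is
             * "struct cpumask[1]": &tmpmask is just the array's address
             * and alloc_cpumask_var() is a no-op that returns true.
             * With CONFIG_CPUMASK_OFFSTACK=y it is "struct cpumask *"
             * and the mask is kmalloc'ed. Either way, bare "tmpmask"
             * behaves as a struct cpumask * from here on.
             */
            cpumask_and(tmpmask, &cpu_online_map, &cpu_present_map);
            free_cpumask_var(tmpmask);
            return 0;
    }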

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 36 ++++++++++++++++++++++++++++++++++++
    kernel/Kconfig.preempt | 10 ++++++++++
    lib/cpumask.c | 31 +++++++++++++++++++++++++++++++
    3 files changed, 77 insertions(+)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -485,6 +485,42 @@ int __next_cpu_nr(int n, const cpumask_t
    #endif /* NR_CPUS > 64 */

    /*
    + * cpumask_var_t: struct cpumask for stack usage.
    + *
    + * Oh, the wicked games we play! In order to make kernel coding a
    + * little more difficult, we typedef cpumask_var_t to an array or a
    + * pointer: doing &mask on an array is a noop, so it still works.
    + *
    + * ie.
    + * cpumask_var_t tmpmask;
    + * if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
    + * return -ENOMEM;
    + *
    + * ... use 'tmpmask' like a normal struct cpumask * ...
    + *
    + * free_cpumask_var(tmpmask);
    + */
    +#ifdef CONFIG_CPUMASK_OFFSTACK
    +typedef struct cpumask *cpumask_var_t;
    +
    +bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags);
    +void free_cpumask_var(cpumask_var_t mask);
    +
    +#else
    +typedef struct cpumask cpumask_var_t[1];
    +
    +static inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
    +{
    + return true;
    +}
    +
    +static inline void free_cpumask_var(cpumask_var_t mask)
    +{
    +}
    +
    +#endif /* CONFIG_CPUMASK_OFFSTACK */
    +
    +/*
    * The following particular system cpumasks and operations manage
    * possible, present, active and online cpus. Each of them is a fixed size
    * bitmap of size NR_CPUS.
    --- linux-2.6.28.orig/kernel/Kconfig.preempt
    +++ linux-2.6.28/kernel/Kconfig.preempt
    @@ -77,3 +77,13 @@ config RCU_TRACE

    Say Y here if you want to enable RCU tracing
    Say N if you are unsure.
    +
    +# FIXME - does not need to be in this Kconfig file but putting it here puts it
    +# on the processors menu. To fix means changing all arch Kconfig's.
    +config CPUMASK_OFFSTACK
    + bool
    + prompt "Force CPU masks off stack" if DEBUG_PER_CPU_MAPS
    + help
    + Use dynamic allocation for cpumask_var_t, instead of putting
    + them on the stack. This is a bit more expensive, but avoids
    + stack overflow.
    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -43,3 +43,34 @@ int __any_online_cpu(const cpumask_t *ma
    return cpu;
    }
    EXPORT_SYMBOL(__any_online_cpu);
    +
    +/* These are not inline because of header tangles. */
    +#ifdef CONFIG_CPUMASK_OFFSTACK
    +bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
    +{
    + if (likely(slab_is_available()))
    + *mask = kmalloc(cpumask_size(), flags);
    + else {
    +#ifdef CONFIG_DEBUG_PER_CPU_MAPS
    + printk(KERN_ERR
    + "=> alloc_cpumask_var: kmalloc not available!\n");
    + dump_stack();
    +#endif
    + *mask = NULL;
    + }
    +#ifdef CONFIG_DEBUG_PER_CPU_MAPS
    + if (!*mask) {
    + printk(KERN_ERR "=> alloc_cpumask_var: failed!\n");
    + dump_stack();
    + }
    +#endif
    + return *mask != NULL;
    +}
    +EXPORT_SYMBOL(alloc_cpumask_var);
    +
    +void free_cpumask_var(cpumask_var_t mask)
    +{
    + kfree(mask);
    +}
    +EXPORT_SYMBOL(free_cpumask_var);
    +#endif


  4. [PATCH 04/35] cpumask: centralize cpu_online_map and cpu_possible_map

    Each SMP arch defines these themselves. Move them to a central
    location.

    Two twists:
    1) Some archs set possible_map to all 1, so we add a
    CONFIG_INIT_ALL_POSSIBLE for this rather than break them.

    2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
    Those archs simply have phys_cpu_present_map replaced everywhere.
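
    Concretely (MYARCH as a placeholder), an arch that used to carry

            cpumask_t cpu_possible_map = CPU_MASK_ALL;
            EXPORT_SYMBOL(cpu_possible_map);

    in its smp.c now deletes those lines; if it wants the all-ones
    initialization, it selects the new option from its Kconfig instead:

            config MYARCH
                    select INIT_ALL_POSSIBLE

    and the single shared definition lives in kernel/cpu.c (last hunk
    below).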

    Signed-off-by: Rusty Russell

    ---
    arch/alpha/kernel/smp.c | 5 -----
    arch/arm/kernel/smp.c | 10 ----------
    arch/cris/arch-v32/kernel/smp.c | 4 ----
    arch/ia64/kernel/smpboot.c | 6 ------
    arch/m32r/Kconfig | 1 +
    arch/m32r/kernel/smpboot.c | 6 ------
    arch/mips/include/asm/smp.h | 3 ---
    arch/mips/kernel/smp-cmp.c | 2 +-
    arch/mips/kernel/smp-mt.c | 2 +-
    arch/mips/kernel/smp.c | 7 +------
    arch/mips/kernel/smtc.c | 6 +++---
    arch/mips/pmc-sierra/yosemite/smp.c | 6 +++---
    arch/mips/sgi-ip27/ip27-smp.c | 2 +-
    arch/mips/sibyte/bcm1480/smp.c | 8 ++++----
    arch/mips/sibyte/sb1250/smp.c | 8 ++++----
    arch/parisc/Kconfig | 1 +
    arch/parisc/kernel/smp.c | 15 ---------------
    arch/powerpc/kernel/smp.c | 4 ----
    arch/s390/Kconfig | 1 +
    arch/s390/kernel/smp.c | 6 ------
    arch/sh/kernel/smp.c | 6 ------
    arch/sparc/include/asm/smp_32.h | 2 --
    arch/sparc/kernel/smp.c | 6 ++----
    arch/sparc/kernel/sparc_ksyms.c | 4 ----
    arch/sparc64/kernel/smp.c | 4 ----
    arch/um/kernel/smp.c | 7 -------
    arch/x86/kernel/smpboot.c | 6 ------
    arch/x86/mach-voyager/voyager_smp.c | 7 -------
    init/Kconfig | 9 +++++++++
    kernel/cpu.c | 11 ++++++-----
    30 files changed, 38 insertions(+), 127 deletions(-)

    --- linux-2.6.28.orig/arch/alpha/kernel/smp.c
    +++ linux-2.6.28/arch/alpha/kernel/smp.c
    @@ -70,11 +70,6 @@ enum ipi_message_type {
    /* Set to a secondary's cpuid when it comes online. */
    static int smp_secondary_alive __devinitdata = 0;

    -/* Which cpus ids came online. */
    -cpumask_t cpu_online_map;
    -
    -EXPORT_SYMBOL(cpu_online_map);
    -
    int smp_num_probed; /* Internal processor count */
    int smp_num_cpus = 1; /* Number that came online. */
    EXPORT_SYMBOL(smp_num_cpus);
    --- linux-2.6.28.orig/arch/arm/kernel/smp.c
    +++ linux-2.6.28/arch/arm/kernel/smp.c
    @@ -34,16 +34,6 @@
    #include

    /*
    - * bitmask of present and online CPUs.
    - * The present bitmask indicates that the CPU is physically present.
    - * The online bitmask indicates that the CPU is up and running.
    - */
    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    -/*
    * as from 2.5, kernels no longer have an init_tasks structure
    * so we need some other way of telling a new secondary core
    * where to place its SVC stack
    --- linux-2.6.28.orig/arch/cris/arch-v32/kernel/smp.c
    +++ linux-2.6.28/arch/cris/arch-v32/kernel/smp.c
    @@ -29,11 +29,7 @@
    spinlock_t cris_atomic_locks[] = { [0 ... LOCK_COUNT - 1] = SPIN_LOCK_UNLOCKED};

    /* CPU masks */
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_online_map);
    cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);
    EXPORT_SYMBOL(phys_cpu_present_map);

    /* Variables used during SMP boot */
    --- linux-2.6.28.orig/arch/ia64/kernel/smpboot.c
    +++ linux-2.6.28/arch/ia64/kernel/smpboot.c
    @@ -131,12 +131,6 @@ struct task_struct *task_for_booting_cpu
    */
    DEFINE_PER_CPU(int, cpu_state);

    -/* Bitmasks of currently online, and possible CPUs */
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
    EXPORT_SYMBOL(cpu_core_map);
    DEFINE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
    --- linux-2.6.28.orig/arch/m32r/Kconfig
    +++ linux-2.6.28/arch/m32r/Kconfig
    @@ -10,6 +10,7 @@ config M32R
    default y
    select HAVE_IDE
    select HAVE_OPROFILE
    + select INIT_ALL_POSSIBLE

    config SBUS
    bool
    --- linux-2.6.28.orig/arch/m32r/kernel/smpboot.c
    +++ linux-2.6.28/arch/m32r/kernel/smpboot.c
    @@ -73,17 +73,11 @@ static unsigned int bsp_phys_id = -1;
    /* Bitmask of physically existing CPUs */
    physid_mask_t phys_cpu_present_map;

    -/* Bitmask of currently online CPUs */
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    cpumask_t cpu_bootout_map;
    cpumask_t cpu_bootin_map;
    static cpumask_t cpu_callin_map;
    cpumask_t cpu_callout_map;
    EXPORT_SYMBOL(cpu_callout_map);
    -cpumask_t cpu_possible_map = CPU_MASK_ALL;
    -EXPORT_SYMBOL(cpu_possible_map);

    /* Per CPU bogomips and other parameters */
    struct cpuinfo_m32r cpu_data[NR_CPUS] __cacheline_aligned;
    --- linux-2.6.28.orig/arch/mips/include/asm/smp.h
    +++ linux-2.6.28/arch/mips/include/asm/smp.h
    @@ -38,9 +38,6 @@ extern int __cpu_logical_map[NR_CPUS];
    #define SMP_RESCHEDULE_YOURSELF 0x1 /* XXX braindead */
    #define SMP_CALL_FUNCTION 0x2

    -extern cpumask_t phys_cpu_present_map;
    -#define cpu_possible_map phys_cpu_present_map
    -
    extern void asmlinkage smp_bootstrap(void);

    /*
    --- linux-2.6.28.orig/arch/mips/kernel/smp-cmp.c
    +++ linux-2.6.28/arch/mips/kernel/smp-cmp.c
    @@ -226,7 +226,7 @@ void __init cmp_smp_setup(void)

    for (i = 1; i < NR_CPUS; i++) {
    if (amon_cpu_avail(i)) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = ++ncpu;
    __cpu_logical_map[ncpu] = i;
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smp-mt.c
    +++ linux-2.6.28/arch/mips/kernel/smp-mt.c
    @@ -70,7 +70,7 @@ static unsigned int __init smvp_vpe_init
    write_vpe_c0_vpeconf0(tmp);

    /* Record this as available CPU */
    - cpu_set(tc, phys_cpu_present_map);
    + cpu_set(tc, cpu_possible_map);
    __cpu_number_map[tc] = ++ncpu;
    __cpu_logical_map[ncpu] = tc;
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smp.c
    +++ linux-2.6.28/arch/mips/kernel/smp.c
    @@ -44,15 +44,10 @@
    #include <asm/mipsmtregs.h>
    #endif /* CONFIG_MIPS_MT_SMTC */

    -cpumask_t phys_cpu_present_map; /* Bitmask of available CPUs */
    volatile cpumask_t cpu_callin_map; /* Bitmask of started secondaries */
    -cpumask_t cpu_online_map; /* Bitmask of currently online CPUs */
    int __cpu_number_map[NR_CPUS]; /* Map physical to logical */
    int __cpu_logical_map[NR_CPUS]; /* Map logical to physical */

    -EXPORT_SYMBOL(phys_cpu_present_map);
    -EXPORT_SYMBOL(cpu_online_map);
    -
    extern void cpu_idle(void);

    /* Number of TCs (or siblings in Intel speak) per CPU core */
    @@ -199,7 +194,7 @@ void __devinit smp_prepare_boot_cpu(void
    */
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;
    - cpu_set(0, phys_cpu_present_map);
    + cpu_set(0, cpu_possible_map);
    cpu_set(0, cpu_online_map);
    cpu_set(0, cpu_callin_map);
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smtc.c
    +++ linux-2.6.28/arch/mips/kernel/smtc.c
    @@ -290,7 +290,7 @@ static void smtc_configure_tlb(void)
    * possibly leave some TCs/VPEs as "slave" processors.
    *
    * Use c0_MVPConf0 to find out how many TCs are available, setting up
    - * phys_cpu_present_map and the logical/physical mappings.
    + * cpu_possible_map and the logical/physical mappings.
    */

    int __init smtc_build_cpu_map(int start_cpu_slot)
    @@ -304,7 +304,7 @@ int __init smtc_build_cpu_map(int start_
    */
    ntcs = ((read_c0_mvpconf0() & MVPCONF0_PTC) >> MVPCONF0_PTC_SHIFT) + 1;
    for (i=start_cpu_slot; i<NR_CPUS && i<ntcs; i++) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = i;
    __cpu_logical_map[i] = i;
    }
    @@ -521,7 +521,7 @@ void smtc_prepare_cpus(int cpus)
    * Pull any physically present but unused TCs out of circulation.
    */
    while (tc < (((val & MVPCONF0_PTC) >> MVPCONF0_PTC_SHIFT) + 1)) {
    - cpu_clear(tc, phys_cpu_present_map);
    + cpu_clear(tc, cpu_possible_map);
    cpu_clear(tc, cpu_present_map);
    tc++;
    }
    --- linux-2.6.28.orig/arch/mips/pmc-sierra/yosemite/smp.c
    +++ linux-2.6.28/arch/mips/pmc-sierra/yosemite/smp.c
    @@ -141,7 +141,7 @@ static void __cpuinit yos_boot_secondary
    }

    /*
    - * Detect available CPUs, populate phys_cpu_present_map before smp_init
    + * Detect available CPUs, populate cpu_possible_map before smp_init
    *
    * We don't want to start the secondary CPU yet nor do we have a nice probing
    * feature in PMON so we just assume presence of the secondary core.
    @@ -150,10 +150,10 @@ static void __init yos_smp_setup(void)
    {
    int i;

    - cpus_clear(phys_cpu_present_map);
    + cpus_clear(cpu_possible_map);

    for (i = 0; i < 2; i++) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = i;
    __cpu_logical_map[i] = i;
    }
    --- linux-2.6.28.orig/arch/mips/sgi-ip27/ip27-smp.c
    +++ linux-2.6.28/arch/mips/sgi-ip27/ip27-smp.c
    @@ -76,7 +76,7 @@ static int do_cpumask(cnodeid_t cnode, n
    /* Only let it join in if it's marked enabled */
    if ((acpu->cpu_info.flags & KLINFO_ENABLE) &&
    (tot_cpus_found != NR_CPUS)) {
    - cpu_set(cpuid, phys_cpu_present_map);
    + cpu_set(cpuid, cpu_possible_map);
    alloc_cpupda(cpuid, tot_cpus_found);
    cpus_found++;
    tot_cpus_found++;
    --- linux-2.6.28.orig/arch/mips/sibyte/bcm1480/smp.c
    +++ linux-2.6.28/arch/mips/sibyte/bcm1480/smp.c
    @@ -136,7 +136,7 @@ static void __cpuinit bcm1480_boot_secon

    /*
    * Use CFE to find out how many CPUs are available, setting up
    - * phys_cpu_present_map and the logical/physical mappings.
    + * cpu_possible_map and the logical/physical mappings.
    * XXXKW will the boot CPU ever not be physical 0?
    *
    * Common setup before any secondaries are started
    @@ -145,14 +145,14 @@ static void __init bcm1480_smp_setup(voi
    {
    int i, num;

    - cpus_clear(phys_cpu_present_map);
    - cpu_set(0, phys_cpu_present_map);
    + cpus_clear(cpu_possible_map);
    + cpu_set(0, cpu_possible_map);
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;

    for (i = 1, num = 0; i < NR_CPUS; i++) {
    if (cfe_cpu_stop(i) == 0) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = ++num;
    __cpu_logical_map[num] = i;
    }
    --- linux-2.6.28.orig/arch/mips/sibyte/sb1250/smp.c
    +++ linux-2.6.28/arch/mips/sibyte/sb1250/smp.c
    @@ -124,7 +124,7 @@ static void __cpuinit sb1250_boot_second

    /*
    * Use CFE to find out how many CPUs are available, setting up
    - * phys_cpu_present_map and the logical/physical mappings.
    + * cpu_possible_map and the logical/physical mappings.
    * XXXKW will the boot CPU ever not be physical 0?
    *
    * Common setup before any secondaries are started
    @@ -133,14 +133,14 @@ static void __init sb1250_smp_setup(void
    {
    int i, num;

    - cpus_clear(phys_cpu_present_map);
    - cpu_set(0, phys_cpu_present_map);
    + cpus_clear(cpu_possible_map);
    + cpu_set(0, cpu_possible_map);
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;

    for (i = 1, num = 0; i < NR_CPUS; i++) {
    if (cfe_cpu_stop(i) == 0) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = ++num;
    __cpu_logical_map[num] = i;
    }
    --- linux-2.6.28.orig/arch/parisc/Kconfig
    +++ linux-2.6.28/arch/parisc/Kconfig
    @@ -11,6 +11,7 @@ config PARISC
    select HAVE_OPROFILE
    select RTC_CLASS
    select RTC_DRV_PARISC
    + select INIT_ALL_POSSIBLE
    help
    The PA-RISC microprocessor is designed by Hewlett-Packard and used
    in many of their workstations & servers (HP9000 700 and 800 series,
    --- linux-2.6.28.orig/arch/parisc/kernel/smp.c
    +++ linux-2.6.28/arch/parisc/kernel/smp.c
    @@ -67,21 +67,6 @@ static volatile int cpu_now_booting __re

    static int parisc_max_cpus __read_mostly = 1;

    -/* online cpus are ones that we've managed to bring up completely
    - * possible cpus are all valid cpu
    - * present cpus are all detected cpu
    - *
    - * On startup we bring up the "possible" cpus. Since we discover
    - * CPUs later, we add them as hotplug, so the possible cpu mask is
    - * empty in the beginning.
    - */
    -
    -cpumask_t cpu_online_map __read_mostly = CPU_MASK_NONE; /* Bitmap of online CPUs */
    -cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL; /* Bitmap of Present CPUs */
    -
    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    DEFINE_PER_CPU(spinlock_t, ipi_lock) = SPIN_LOCK_UNLOCKED;

    enum ipi_message_type {
    --- linux-2.6.28.orig/arch/powerpc/kernel/smp.c
    +++ linux-2.6.28/arch/powerpc/kernel/smp.c
    @@ -60,13 +60,9 @@
    int smp_hw_index[NR_CPUS];
    struct thread_info *secondary_ti;

    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map) = CPU_MASK_NONE;
    DEFINE_PER_CPU(cpumask_t, cpu_core_map) = CPU_MASK_NONE;

    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(cpu_possible_map);
    EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
    EXPORT_PER_CPU_SYMBOL(cpu_core_map);

    --- linux-2.6.28.orig/arch/s390/Kconfig
    +++ linux-2.6.28/arch/s390/Kconfig
    @@ -75,6 +75,7 @@ config S390
    select HAVE_KRETPROBES
    select HAVE_KVM if 64BIT
    select HAVE_ARCH_TRACEHOOK
    + select INIT_ALL_POSSIBLE

    source "init/Kconfig"

    --- linux-2.6.28.orig/arch/s390/kernel/smp.c
    +++ linux-2.6.28/arch/s390/kernel/smp.c
    @@ -52,12 +52,6 @@
    struct _lowcore *lowcore_ptr[NR_CPUS];
    EXPORT_SYMBOL(lowcore_ptr);

    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    -cpumask_t cpu_possible_map = CPU_MASK_ALL;
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    static struct task_struct *current_set[NR_CPUS];

    static u8 smp_cpu_type;
    --- linux-2.6.28.orig/arch/sh/kernel/smp.c
    +++ linux-2.6.28/arch/sh/kernel/smp.c
    @@ -30,12 +30,6 @@
    int __cpu_number_map[NR_CPUS]; /* Map physical to logical */
    int __cpu_logical_map[NR_CPUS]; /* Map logical to physical */

    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    static inline void __init smp_store_cpu_info(unsigned int cpu)
    {
    struct sh_cpuinfo *c = cpu_data + cpu;
    --- linux-2.6.28.orig/arch/sparc/include/asm/smp_32.h
    +++ linux-2.6.28/arch/sparc/include/asm/smp_32.h
    @@ -29,8 +29,6 @@
    */

    extern unsigned char boot_cpu_id;
    -extern cpumask_t phys_cpu_present_map;
    -#define cpu_possible_map phys_cpu_present_map

    typedef void (*smpfunc_t)(unsigned long, unsigned long, unsigned long,
    unsigned long, unsigned long);
    --- linux-2.6.28.orig/arch/sparc/kernel/smp.c
    +++ linux-2.6.28/arch/sparc/kernel/smp.c
    @@ -39,8 +39,6 @@ volatile unsigned long cpu_callin_map[NR
    unsigned char boot_cpu_id = 0;
    unsigned char boot_cpu_id4 = 0; /* boot_cpu_id << 2 */

    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
    cpumask_t smp_commenced_mask = CPU_MASK_NONE;

    /* The only guaranteed locking primitive available on all Sparc
    @@ -334,7 +332,7 @@ void __init smp_setup_cpu_possible_map(v
    instance = 0;
    while (!cpu_find_by_instance(instance, NULL, &mid)) {
    if (mid < NR_CPUS) {
    - cpu_set(mid, phys_cpu_present_map);
    + cpu_set(mid, cpu_possible_map);
    cpu_set(mid, cpu_present_map);
    }
    instance++;
    @@ -354,7 +352,7 @@ void __init smp_prepare_boot_cpu(void)

    current_thread_info()->cpu = cpuid;
    cpu_set(cpuid, cpu_online_map);
    - cpu_set(cpuid, phys_cpu_present_map);
    + cpu_set(cpuid, cpu_possible_map);
    }

    int __cpuinit __cpu_up(unsigned int cpu)
    --- linux-2.6.28.orig/arch/sparc/kernel/sparc_ksyms.c
    +++ linux-2.6.28/arch/sparc/kernel/sparc_ksyms.c
    @@ -113,10 +113,6 @@ EXPORT_PER_CPU_SYMBOL(__cpu_data);
    #ifdef CONFIG_SMP
    /* IRQ implementation. */
    EXPORT_SYMBOL(synchronize_irq);
    -
    -/* CPU online map and active count. */
    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(phys_cpu_present_map);
    #endif

    EXPORT_SYMBOL(__udelay);
    --- linux-2.6.28.orig/arch/sparc64/kernel/smp.c
    +++ linux-2.6.28/arch/sparc64/kernel/smp.c
    @@ -49,14 +49,10 @@

    int sparc64_multi_core __read_mostly;

    -cpumask_t cpu_possible_map __read_mostly = CPU_MASK_NONE;
    -cpumask_t cpu_online_map __read_mostly = CPU_MASK_NONE;
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map) = CPU_MASK_NONE;
    cpumask_t cpu_core_map[NR_CPUS] __read_mostly =
    { [0 ... NR_CPUS-1] = CPU_MASK_NONE };

    -EXPORT_SYMBOL(cpu_possible_map);
    -EXPORT_SYMBOL(cpu_online_map);
    EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
    EXPORT_SYMBOL(cpu_core_map);

    --- linux-2.6.28.orig/arch/um/kernel/smp.c
    +++ linux-2.6.28/arch/um/kernel/smp.c
    @@ -25,13 +25,6 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_ga
    #include "irq_user.h"
    #include "os.h"

    -/* CPU online map, set by smp_boot_cpus */
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -
    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    /* Per CPU bogomips and other parameters
    * The only piece used here is the ipi pipe, which is set before SMP is
    * started and never changed.
    --- linux-2.6.28.orig/arch/x86/kernel/smpboot.c
    +++ linux-2.6.28/arch/x86/kernel/smpboot.c
    @@ -101,14 +101,8 @@ EXPORT_SYMBOL(smp_num_siblings);
    /* Last level cache ID of each logical CPU */
    DEFINE_PER_CPU(u16, cpu_llc_id) = BAD_APICID;

    -/* bitmap of online cpus */
    -cpumask_t cpu_online_map __read_mostly;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    cpumask_t cpu_callin_map;
    cpumask_t cpu_callout_map;
    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);

    /* representing HT siblings of each logical CPU */
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);
    --- linux-2.6.28.orig/arch/x86/mach-voyager/voyager_smp.c
    +++ linux-2.6.28/arch/x86/mach-voyager/voyager_smp.c
    @@ -62,11 +62,6 @@ static int voyager_extended_cpus = 1;
    /* Used for the invalidate map that's also checked in the spinlock */
    static volatile unsigned long smp_invalidate_needed;

    -/* Bitmask of currently online CPUs - used by setup.c for
    - /proc/cpuinfo, visible externally but still physical */
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    /* Bitmask of CPUs present in the system - exported by i386_syms.c, used
    * by scheduler but indexed physically */
    cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
    @@ -216,8 +211,6 @@ static cpumask_t smp_commenced_mask = CP
    /* This is for the new dynamic CPU boot code */
    cpumask_t cpu_callin_map = CPU_MASK_NONE;
    cpumask_t cpu_callout_map = CPU_MASK_NONE;
    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_possible_map);

    /* The per processor IRQ masks (these are usually kept in sync) */
    static __u16 vic_irq_mask[NR_CPUS] __cacheline_aligned;
    --- linux-2.6.28.orig/init/Kconfig
    +++ linux-2.6.28/init/Kconfig
    @@ -911,6 +911,15 @@ config KMOD

    endif # MODULES

    +config INIT_ALL_POSSIBLE
    + bool
    + help
    + Back when each arch used to define their own cpu_online_map and
    + cpu_possible_map, some of them chose to initialize cpu_possible_map
    + with all 1s, and others with all 0s. When they were centralised,
    + it was better to provide this option than to break all the archs
    + and have several arch maintainers pursuing me down dark alleys.
    +
    config STOP_MACHINE
    bool
    default y
    --- linux-2.6.28.orig/kernel/cpu.c
    +++ linux-2.6.28/kernel/cpu.c
    @@ -24,19 +24,20 @@
    cpumask_t cpu_present_map __read_mostly;
    EXPORT_SYMBOL(cpu_present_map);

    -#ifndef CONFIG_SMP
    -
    /*
    * Represents all cpu's that are currently online.
    */
    -cpumask_t cpu_online_map __read_mostly = CPU_MASK_ALL;
    +cpumask_t cpu_online_map __read_mostly;
    EXPORT_SYMBOL(cpu_online_map);

    +#ifdef CONFIG_INIT_ALL_POSSIBLE
    cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
    +#else
    +cpumask_t cpu_possible_map __read_mostly;
    +#endif
    EXPORT_SYMBOL(cpu_possible_map);

    -#else /* CONFIG_SMP */
    -
    +#ifdef CONFIG_SMP
    /* Serializes the updates to cpu_online_map, cpu_present_map */
    static DEFINE_MUTEX(cpu_add_remove_lock);



  5. [PATCH 29/35] cpumask: switch over to cpu_online/possible/active/present_mask From: Rusty Russell <rusty@rustcorp.com.au>

    In order to hide the definition of struct cpumask, we need to expose
    only pointers. Plus, it fits the new API far better to have pointers.

    This deprecates the old _map versions, and defines them in terms of the
    _mask versions. It also centralizes the definitions (finally!).
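
    A sketch of the compatibility scheme (hypothetical helpers; the macro
    is quoted from the hunk below, and the extern follows the const-pointer
    declarations this series introduces):

    /* New API exposes only a const pointer... */
    extern const struct cpumask *const cpu_online_mask;
    /* ...while the deprecated name aliases the same storage, un-consted:
     *   #define cpu_online_map (*(cpumask_t *)cpu_online_mask)
     * so old and new callers both keep compiling:
     */
    static bool cpu_is_up_old(unsigned int cpu)
    {
            return cpu_isset(cpu, cpu_online_map);          /* legacy */
    }

    static bool cpu_is_up_new(unsigned int cpu)
    {
            return cpumask_test_cpu(cpu, cpu_online_mask);  /* preferred */
    }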

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    arch/alpha/kernel/smp.c | 5 --
    arch/arm/kernel/smp.c | 10 ----
    arch/cris/arch-v32/kernel/smp.c | 4 -
    arch/ia64/kernel/smpboot.c | 6 --
    arch/m32r/kernel/smpboot.c | 6 --
    arch/mips/kernel/smp.c | 2
    arch/parisc/kernel/smp.c | 15 ------
    arch/powerpc/kernel/smp.c | 4 -
    arch/s390/kernel/smp.c | 6 --
    arch/sh/kernel/smp.c | 6 --
    arch/sparc/kernel/sparc_ksyms.c | 2
    include/linux/cpumask.h | 84 ++++++++++++++++++++----------------------------
    kernel/cpu.c | 71 +++++++++++++++++++---------------------
    2 files changed, 70 insertions(+), 85 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -87,9 +87,9 @@
    * int cpumask_any_and(mask1,mask2) Any cpu in both masks
    * int cpumask_any_but(mask,cpu) Any cpu in mask except cpu
    *
    - * for_each_possible_cpu(cpu) for-loop cpu over cpu_possible_map
    - * for_each_online_cpu(cpu) for-loop cpu over cpu_online_map
    - * for_each_present_cpu(cpu) for-loop cpu over cpu_present_map
    + * for_each_possible_cpu(cpu) for-loop cpu over cpu_possible_mask
    + * for_each_online_cpu(cpu) for-loop cpu over cpu_online_mask
    + * for_each_present_cpu(cpu) for-loop cpu over cpu_present_mask
    *
    * Subtlety:
    * 1) The 'type-checked' form of cpu_isset() causes gcc (3.3.2, anyway)
    @@ -160,7 +160,7 @@ extern cpumask_t _unused_cpumask_arg_;
    for_each_cpu_and(cpu, &(mask), &(and))
    #define first_cpu(src) cpumask_first(&(src))
    #define next_cpu(n, src) cpumask_next((n), &(src))
    -#define any_online_cpu(mask) cpumask_any_and(&(mask), &cpu_online_map)
    +#define any_online_cpu(mask) cpumask_any_and(&(mask), cpu_online_mask)
    #if NR_CPUS > BITS_PER_LONG
    #define CPUMASK_ALLOC(m) struct m *m = kmalloc(sizeof(*m), GFP_KERNEL)
    #define CPUMASK_FREE(m) kfree(m)
    @@ -169,6 +169,11 @@ extern cpumask_t _unused_cpumask_arg_;
    #define CPUMASK_FREE(m)
    #endif
    #define CPUMASK_PTR(v, m) cpumask_t *v = &(m->v)
    +/* These strip const, as traditionally they weren't const. */
    +#define cpu_possible_map (*(cpumask_t *)cpu_possible_mask)
    +#define cpu_online_map (*(cpumask_t *)cpu_online_mask)
    +#define cpu_present_map (*(cpumask_t *)cpu_present_mask)
    +#define cpu_active_map (*(cpumask_t *)cpu_active_mask)
    /* End deprecated region. */

    #if NR_CPUS > 1
    @@ -512,65 +517,48 @@ static inline void free_cpumask_var(cpum

    /*
    * The following particular system cpumasks and operations manage
    - * possible, present, active and online cpus. Each of them is a fixed size
    - * bitmap of size NR_CPUS.
    + * possible, present, active and online cpus.
    *
    - * #ifdef CONFIG_HOTPLUG_CPU
    - * cpu_possible_map - has bit 'cpu' set iff cpu is populatable
    - * cpu_present_map - has bit 'cpu' set iff cpu is populated
    - * cpu_online_map - has bit 'cpu' set iff cpu available to scheduler
    - * cpu_active_map - has bit 'cpu' set iff cpu available to migration
    - * #else
    - * cpu_possible_map - has bit 'cpu' set iff cpu is populated
    - * cpu_present_map - copy of cpu_possible_map
    - * cpu_online_map - has bit 'cpu' set iff cpu available to scheduler
    - * #endif
    - *
    - * In either case, NR_CPUS is fixed at compile time, as the static
    - * size of these bitmaps. The cpu_possible_map is fixed at boot
    - * time, as the set of CPU id's that it is possible might ever
    - * be plugged in at anytime during the life of that system boot.
    - * The cpu_present_map is dynamic(*), representing which CPUs
    - * are currently plugged in. And cpu_online_map is the dynamic
    - * subset of cpu_present_map, indicating those CPUs available
    - * for scheduling.
    + * cpu_possible_mask- has bit 'cpu' set iff cpu is populatable
    + * cpu_present_mask - has bit 'cpu' set iff cpu is populated
    + * cpu_online_mask - has bit 'cpu' set iff cpu available to scheduler
    + * cpu_active_mask - has bit 'cpu' set iff cpu available to migration
    + *
    + * If !CONFIG_HOTPLUG_CPU, present == possible, and active == online.
    + *
    + * The cpu_possible_mask is fixed at boot time, as the set of CPU id's
    + * that it is possible might ever be plugged in at anytime during the
    + * life of that system boot. The cpu_present_mask is dynamic(*),
    + * representing which CPUs are currently plugged in. And
    + * cpu_online_mask is the dynamic subset of cpu_present_mask,
    + * indicating those CPUs available for scheduling.
    *
    - * If HOTPLUG is enabled, then cpu_possible_map is forced to have
    + * If HOTPLUG is enabled, then cpu_possible_mask is forced to have
    * all NR_CPUS bits set, otherwise it is just the set of CPUs that
    * ACPI reports present at boot.
    *
    - * If HOTPLUG is enabled, then cpu_present_map varies dynamically,
    + * If HOTPLUG is enabled, then cpu_present_mask varies dynamically,
    * depending on what ACPI reports as currently plugged in, otherwise
    - * cpu_present_map is just a copy of cpu_possible_map.
    + * cpu_present_mask is just a copy of cpu_possible_mask.
    *
    - * (*) Well, cpu_present_map is dynamic in the hotplug case. If not
    - * hotplug, it's a copy of cpu_possible_map, hence fixed at boot.
    + * (*) Well, cpu_present_mask is dynamic in the hotplug case. If not
    + * hotplug, it's a copy of cpu_possible_mask, hence fixed at boot.
    *
    * Subtleties:
    * 1) UP arch's (NR_CPUS == 1, CONFIG_SMP not defined) hardcode
    * assumption that their single CPU is online. The UP
    - * cpu_{online,possible,present}_maps are placebos. Changing them
    + * cpu_{online,possible,present}_masks are placebos. Changing them
    * will have no useful affect on the following num_*_cpus()
    * and cpu_*() macros in the UP case. This ugliness is a UP
    * optimization - don't waste any instructions or memory references
    * asking if you're online or how many CPUs there are if there is
    * only one CPU.
    - * 2) Most SMP arch's #define some of these maps to be some
    - * other map specific to that arch. Therefore, the following
    - * must be #define macros, not inlines. To see why, examine
    - * the assembly code produced by the following. Note that
    - * set1() writes phys_x_map, but set2() writes x_map:
    - * int x_map, phys_x_map;
    - * #define set1(a) x_map = a
    - * inline void set2(int a) { x_map = a; }
    - * #define x_map phys_x_map
    - * main(){ set1(3); set2(5); }
    */

    -extern cpumask_t cpu_possible_map;
    -extern cpumask_t cpu_online_map;
    -extern cpumask_t cpu_present_map;
    -extern cpumask_t cpu_active_map;
    +extern const struct cpumask *const cpu_possible_mask;
    +extern const struct cpumask *const cpu_online_mask;
    +extern const struct cpumask *const cpu_present_mask;
    +extern const struct cpumask *const cpu_active_mask;

    #if NR_CPUS > 1
    #define num_online_cpus() cpus_weight(cpu_online_map)
    @@ -601,8 +589,8 @@ void init_cpu_online(const struct cpumas

    #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))

    -#define for_each_possible_cpu(cpu) for_each_cpu((cpu), &cpu_possible_map)
    -#define for_each_online_cpu(cpu) for_each_cpu((cpu), &cpu_online_map)
    -#define for_each_present_cpu(cpu) for_each_cpu((cpu), &cpu_present_map)
    +#define for_each_possible_cpu(cpu) for_each_cpu((cpu), cpu_possible_mask)
    +#define for_each_online_cpu(cpu) for_each_cpu((cpu), cpu_online_mask)
    +#define for_each_present_cpu(cpu) for_each_cpu((cpu), cpu_present_mask)

    #endif /* __LINUX_CPUMASK_H */
    --- linux-2.6.28.orig/kernel/cpu.c
    +++ linux-2.6.28/kernel/cpu.c
    @@ -15,30 +15,8 @@
    #include
    #include

    -/*
    - * Represents all cpu's present in the system
    - * In systems capable of hotplug, this map could dynamically grow
    - * as new cpu's are detected in the system via any platform specific
    - * method, such as ACPI for e.g.
    - */
    -cpumask_t cpu_present_map __read_mostly;
    -EXPORT_SYMBOL(cpu_present_map);
    -
    -/*
    - * Represents all cpu's that are currently online.
    - */
    -cpumask_t cpu_online_map __read_mostly;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    -#ifdef CONFIG_INIT_ALL_POSSIBLE
    -cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
    -#else
    -cpumask_t cpu_possible_map __read_mostly;
    -#endif
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    #ifdef CONFIG_SMP
    -/* Serializes the updates to cpu_online_map, cpu_present_map */
    +/* Serializes the updates to cpu_online_mask, cpu_present_mask */
    static DEFINE_MUTEX(cpu_add_remove_lock);

    static __cpuinitdata RAW_NOTIFIER_HEAD(cpu_chain);
    @@ -65,8 +43,6 @@ void __init cpu_hotplug_init(void)
    cpu_hotplug.refcount = 0;
    }

    -cpumask_t cpu_active_map;
    -
    #ifdef CONFIG_HOTPLUG_CPU

    void get_online_cpus(void)
    @@ -97,7 +73,7 @@ EXPORT_SYMBOL_GPL(put_online_cpus);

    /*
    * The following two API's must be used when attempting
    - * to serialize the updates to cpu_online_map, cpu_present_map.
    + * to serialize the updates to cpu_online_mask, cpu_present_mask.
    */
    void cpu_maps_update_begin(void)
    {
    @@ -501,43 +477,64 @@ const unsigned long cpu_bit_bitmap[BITS_
    };
    EXPORT_SYMBOL_GPL(cpu_bit_bitmap);

    +#ifdef CONFIG_INIT_ALL_POSSIBLE
    +static DECLARE_BITMAP(cpu_possible_bits, CONFIG_NR_CPUS) __read_mostly
    + = CPU_BITS_ALL;
    +#else
    +static DECLARE_BITMAP(cpu_possible_bits, CONFIG_NR_CPUS) __read_mostly;
    +#endif
    +const struct cpumask *const cpu_possible_mask = to_cpumask(cpu_possible_bits);
    +EXPORT_SYMBOL(cpu_possible_mask);
    +
    +static DECLARE_BITMAP(cpu_online_bits, CONFIG_NR_CPUS) __read_mostly;
    +const struct cpumask *const cpu_online_mask = to_cpumask(cpu_online_bits);
    +EXPORT_SYMBOL(cpu_online_mask);
    +
    +static DECLARE_BITMAP(cpu_present_bits, CONFIG_NR_CPUS) __read_mostly;
    +const struct cpumask *const cpu_present_mask = to_cpumask(cpu_present_bits);
    +EXPORT_SYMBOL(cpu_present_mask);
    +
    +static DECLARE_BITMAP(cpu_active_bits, CONFIG_NR_CPUS) __read_mostly;
    +const struct cpumask *const cpu_active_mask = to_cpumask(cpu_active_bits);
    +EXPORT_SYMBOL(cpu_active_mask);
    +
    void set_cpu_possible(unsigned int cpu, bool possible)
    {
    if (possible)
    - cpumask_set_cpu(cpu, &cpu_possible_map);
    + cpumask_set_cpu(cpu, to_cpumask(cpu_possible_bits));
    else
    - cpumask_clear_cpu(cpu, &cpu_possible_map);
    + cpumask_clear_cpu(cpu, to_cpumask(cpu_possible_bits));
    }
    void set_cpu_present(unsigned int cpu, bool present)
    {
    if (present)
    - cpumask_set_cpu(cpu, &cpu_present_map);
    + cpumask_set_cpu(cpu, to_cpumask(cpu_present_bits));
    else
    - cpumask_clear_cpu(cpu, &cpu_present_map);
    + cpumask_clear_cpu(cpu, to_cpumask(cpu_present_bits));
    }
    void set_cpu_online(unsigned int cpu, bool online)
    {
    if (online)
    - cpumask_set_cpu(cpu, &cpu_online_map);
    + cpumask_set_cpu(cpu, to_cpumask(cpu_online_bits));
    else
    - cpumask_clear_cpu(cpu, &cpu_online_map);
    + cpumask_clear_cpu(cpu, to_cpumask(cpu_online_bits));
    }
    void set_cpu_active(unsigned int cpu, bool active)
    {
    if (active)
    - cpumask_set_cpu(cpu, &cpu_active_map);
    + cpumask_set_cpu(cpu, to_cpumask(cpu_active_bits));
    else
    - cpumask_clear_cpu(cpu, &cpu_active_map);
    + cpumask_clear_cpu(cpu, to_cpumask(cpu_active_bits));
    }
    void init_cpu_present(const struct cpumask *src)
    {
    - cpumask_copy(&cpu_present_map, src);
    + cpumask_copy(to_cpumask(cpu_present_bits), src);
    }
    void init_cpu_possible(const struct cpumask *src)
    {
    - cpumask_copy(&cpu_possible_map, src);
    + cpumask_copy(to_cpumask(cpu_possible_bits), src);
    }
    void init_cpu_online(const struct cpumask *src)
    {
    - cpumask_copy(&cpu_online_map, src);
    + cpumask_copy(to_cpumask(cpu_online_bits), src);
    }


  6. [PATCH 18/35] cpumask: use cpumask_bits() everywhere.

    Instead of accessing ->bits, we use cpumask_bits(). This will be very
    useful when 'struct cpumask' has a hidden definition.
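
    For reference, while the struct is still public the accessor is
    trivial, roughly:

    #define cpumask_bits(maskp) ((maskp)->bits)

    Code converted to cpumask_bits() keeps compiling once 'bits' becomes
    private to the cpumask implementation; direct ->bits users would
    break at that point, which is the point of the exercise.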

    From: Rusty Russell
    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 70 ++++++++++++++++++++++++++++-------------------
    include/linux/seq_file.h | 2 -
    kernel/time/timer_list.c | 4 +-
    lib/cpumask.c | 4 +-
    4 files changed, 47 insertions(+), 33 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -194,12 +194,12 @@ extern int nr_cpu_ids;

    static inline void cpumask_set_cpu(int cpu, volatile struct cpumask *dstp)
    {
    - set_bit(cpu, dstp->bits);
    + set_bit(cpu, cpumask_bits(dstp));
    }

    static inline void cpumask_clear_cpu(int cpu, volatile struct cpumask *dstp)
    {
    - clear_bit(cpu, dstp->bits);
    + clear_bit(cpu, cpumask_bits(dstp));
    }

    /* No static inline type checking - see Subtlety (1) above. */
    @@ -207,130 +207,142 @@ static inline void cpumask_clear_cpu(int

    static inline int cpumask_test_and_set_cpu(int cpu, struct cpumask *addr)
    {
    - return test_and_set_bit(cpu, addr->bits);
    + return test_and_set_bit(cpu, cpumask_bits(addr));
    }

    static inline void cpumask_setall(struct cpumask *dstp)
    {
    - bitmap_fill(dstp->bits, nr_cpumask_bits);
    + bitmap_fill(cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline void cpumask_clear(struct cpumask *dstp)
    {
    - bitmap_zero(dstp->bits, nr_cpumask_bits);
    + bitmap_zero(cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline void cpumask_and(struct cpumask *dstp,
    const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_and(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_and(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_or(struct cpumask *dstp, const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_or(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_or(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_xor(struct cpumask *dstp,
    const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_xor(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_xor(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_andnot(struct cpumask *dstp,
    const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_andnot(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_complement(struct cpumask *dstp,
    const struct cpumask *srcp)
    {
    - bitmap_complement(dstp->bits, srcp->bits, nr_cpumask_bits);
    + bitmap_complement(cpumask_bits(dstp), cpumask_bits(srcp),
    + nr_cpumask_bits);
    }

    static inline int cpumask_equal(const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - return bitmap_equal(src1p->bits, src2p->bits, nr_cpumask_bits);
    + return bitmap_equal(cpumask_bits(src1p), cpumask_bits(src2p),
    + nr_cpumask_bits);
    }

    static inline int cpumask_intersects(const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - return bitmap_intersects(src1p->bits, src2p->bits, nr_cpumask_bits);
    + return bitmap_intersects(cpumask_bits(src1p), cpumask_bits(src2p),
    + nr_cpumask_bits);
    }

    static inline int cpumask_subset(const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - return bitmap_subset(src1p->bits, src2p->bits, nr_cpumask_bits);
    + return bitmap_subset(cpumask_bits(src1p), cpumask_bits(src2p),
    + nr_cpumask_bits);
    }

    static inline int cpumask_empty(const struct cpumask *srcp)
    {
    - return bitmap_empty(srcp->bits, nr_cpumask_bits);
    + return bitmap_empty(cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline int cpumask_full(const struct cpumask *srcp)
    {
    - return bitmap_full(srcp->bits, nr_cpumask_bits);
    + return bitmap_full(cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline int __cpus_weight(const cpumask_t *srcp, int nbits)
    {
    - return bitmap_weight(srcp->bits, nbits);
    + return bitmap_weight(cpumask_bits(srcp), nbits);
    }

    static inline int cpumask_weight(const struct cpumask *srcp)
    {
    - return bitmap_weight(srcp->bits, nr_cpumask_bits);
    + return bitmap_weight(cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline void cpumask_shift_right(struct cpumask *dstp,
    const struct cpumask *srcp, int n)
    {
    - bitmap_shift_right(dstp->bits, srcp->bits, n, nr_cpumask_bits);
    + bitmap_shift_right(cpumask_bits(dstp), cpumask_bits(srcp), n,
    + nr_cpumask_bits);
    }

    static inline void cpumask_shift_left(struct cpumask *dstp,
    const struct cpumask *srcp, int n)
    {
    - bitmap_shift_left(dstp->bits, srcp->bits, n, nr_cpumask_bits);
    + bitmap_shift_left(cpumask_bits(dstp), cpumask_bits(srcp), n,
    + nr_cpumask_bits);
    }

    static inline int cpumask_scnprintf(char *buf, int len,
    const struct cpumask *srcp)
    {
    - return bitmap_scnprintf(buf, len, srcp->bits, nr_cpumask_bits);
    + return bitmap_scnprintf(buf, len, cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline int cpumask_parse_user(const char __user *buf, int len,
    struct cpumask *dstp)
    {
    - return bitmap_parse_user(buf, len, dstp->bits, nr_cpumask_bits);
    + return bitmap_parse_user(buf, len, cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline int cpulist_scnprintf(char *buf, int len,
    const struct cpumask *srcp)
    {
    - return bitmap_scnlistprintf(buf, len, srcp->bits, nr_cpumask_bits);
    + return bitmap_scnlistprintf(buf, len, cpumask_bits(srcp),
    + nr_cpumask_bits);
    }

    static inline int cpulist_parse(const char *buf, struct cpumask *dstp)
    {
    - return bitmap_parselist(buf, dstp->bits, nr_cpumask_bits);
    + return bitmap_parselist(buf, cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline int cpumask_cpuremap(int oldbit,
    const struct cpumask *oldp,
    const struct cpumask *newp)
    {
    - return bitmap_bitremap(oldbit, oldp->bits, newp->bits, nr_cpumask_bits);
    + return bitmap_bitremap(oldbit, cpumask_bits(oldp), cpumask_bits(newp),
    + nr_cpumask_bits);
    }

    static inline void cpumask_remap(struct cpumask *dstp,
    @@ -338,21 +350,23 @@ static inline void cpumask_remap(struct
    const struct cpumask *oldp,
    const struct cpumask *newp)
    {
    - bitmap_remap(dstp->bits, srcp->bits, oldp->bits, newp->bits,
    - nr_cpumask_bits);
    + bitmap_remap(cpumask_bits(dstp), cpumask_bits(srcp),
    + cpumask_bits(oldp), cpumask_bits(newp), nr_cpumask_bits);
    }

    static inline void cpumask_onto(struct cpumask *dstp,
    const struct cpumask *origp,
    const struct cpumask *relmapp)
    {
    - bitmap_onto(dstp->bits, origp->bits, relmapp->bits, nr_cpumask_bits);
    + bitmap_onto(cpumask_bits(dstp), cpumask_bits(origp),
    + cpumask_bits(relmapp), nr_cpumask_bits);
    }

    static inline void cpumask_fold(struct cpumask *dstp,
    const struct cpumask *origp, int sz)
    {
    - bitmap_fold(dstp->bits, origp->bits, sz, nr_cpumask_bits);
    + bitmap_fold(cpumask_bits(dstp), cpumask_bits(origp), sz,
    + nr_cpumask_bits);
    }

    static inline void cpumask_copy(struct cpumask *dstp,
    --- linux-2.6.28.orig/include/linux/seq_file.h
    +++ linux-2.6.28/include/linux/seq_file.h
    @@ -52,7 +52,7 @@ int seq_path_root(struct seq_file *m, st
    int seq_bitmap(struct seq_file *m, unsigned long *bits, unsigned int nr_bits);
    static inline int seq_cpumask(struct seq_file *m, cpumask_t *mask)
    {
    - return seq_bitmap(m, mask->bits, nr_cpu_ids);
    + return seq_bitmap(m, cpumask_bits(mask), nr_cpu_ids);
    }

    static inline int seq_nodemask(struct seq_file *m, nodemask_t *mask)
    --- linux-2.6.28.orig/kernel/time/timer_list.c
    +++ linux-2.6.28/kernel/time/timer_list.c
    @@ -232,10 +232,10 @@ static void timer_list_show_tickdevices(
    #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
    print_tickdevice(m, tick_get_broadcast_device(), -1);
    SEQ_printf(m, "tick_broadcast_mask: %08lx\n",
    - tick_get_broadcast_mask()->bits[0]);
    + cpumask_bits(tick_get_broadcast_mask())[0]);
    #ifdef CONFIG_TICK_ONESHOT
    SEQ_printf(m, "tick_broadcast_oneshot_mask: %08lx\n",
    - tick_get_broadcast_oneshot_mask()->bits[0]);
    + cpumask_bits(tick_get_broadcast_oneshot_mask())[0]);
    #endif
    SEQ_printf(m, "\n");
    #endif
    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -5,13 +5,13 @@

    int __first_cpu(const cpumask_t *srcp)
    {
    - return find_first_bit(srcp->bits, nr_cpumask_bits);
    + return find_first_bit(cpumask_bits(srcp), nr_cpumask_bits);
    }
    EXPORT_SYMBOL(__first_cpu);

    int __next_cpu(int n, const cpumask_t *srcp)
    {
    - return find_next_bit(srcp->bits, nr_cpumask_bits, n+1);
    + return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
    }
    EXPORT_SYMBOL(__next_cpu);



  7. [PATCH 23/35] cpumask: cpumask_any_but() From: Rusty Russell <rusty@rustcorp.com.au>

There's a common case where we want any online cpu except a particular
one. This creates a helper to do that; otherwise we need a temp var
and cpumask_andnot().
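
    A sketch of the typical use, handing work off from a cpu on its way
    down (the error handling here is illustrative only):

    unsigned int target = cpumask_any_but(&cpu_online_map, smp_processor_id());
    if (target >= nr_cpu_ids)
        return; /* no other online cpu */

    As the implementation below shows, the result is >= nr_cpu_ids when
    no suitable cpu exists.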

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 3 +++
    lib/cpumask.c | 10 ++++++++++
    2 files changed, 13 insertions(+)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -110,6 +110,7 @@
    *
    * int cpumask_any(mask) Any cpu in mask
    * int cpumask_any_and(mask1,mask2) Any cpu in both masks
    + * int cpumask_any_but(mask,cpu) Any cpu in mask except cpu
    *
    * for_each_possible_cpu(cpu) for-loop cpu over cpu_possible_map
    * for_each_online_cpu(cpu) for-loop cpu over cpu_online_map
    @@ -451,6 +452,7 @@ extern cpumask_t cpu_mask_all;
    #define cpumask_first(src) ({ (void)(src); 0; })
    #define cpumask_next(n, src) ({ (void)(src); 1; })
    #define cpumask_next_and(n, srcp, andp) ({ (void)(srcp), (void)(andp); 1; })
    +#define cpumask_any_but(mask, cpu) ({ (void)(mask); (void)(cpu); 0; })

    #define for_each_cpu(cpu, mask) \
    for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
    @@ -462,6 +464,7 @@ extern cpumask_t cpu_mask_all;
    int cpumask_first(const cpumask_t *srcp);
    int cpumask_next(int n, const cpumask_t *srcp);
    int cpumask_next_and(int n, const cpumask_t *srcp, const cpumask_t *andp);
    +int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);

    #define for_each_cpu(cpu, mask) \
    for ((cpu) = -1; \
    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -24,6 +24,16 @@ int cpumask_next_and(int n, const cpumas
    }
    EXPORT_SYMBOL(cpumask_next_and);

    +int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
    +{
    + unsigned int i;
    +
    + for_each_cpu(i, mask)
    + if (i != cpu)
    + break;
    + return i;
    +}
    +
    /* These are not inline because of header tangles. */
    #ifdef CONFIG_CPUMASK_OFFSTACK
    bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)


  8. [PATCH 34/35] cpumask: Use accessors code. From: Rusty Russell <rusty@rustcorp.com.au>

    Use the accessors rather than frobbing bits directly. Most of this is
    in arch code I haven't even compiled, but it is mostly straightforward.
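
    The shape of the conversion, with the online map as the example:

    /* before: frob the map's bits directly */
    cpu_set(cpu, cpu_online_map);
    cpu_clear(cpu, cpu_online_map);

    /* after: go through the accessors */
    set_cpu_online(cpu, true);
    set_cpu_online(cpu, false);

    The possible/present/active maps get the same treatment via
    set_cpu_possible(), set_cpu_present() and set_cpu_active().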

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    arch/alpha/kernel/process.c | 4 +--
    arch/alpha/kernel/smp.c | 7 +++---
    arch/arm/kernel/smp.c | 6 ++---
    arch/arm/mach-realview/platsmp.c | 4 +--
    arch/cris/arch-v32/kernel/smp.c | 10 ++++-----
    arch/ia64/kernel/acpi.c | 6 ++---
    arch/ia64/kernel/setup.c | 2 -
    arch/ia64/kernel/smp.c | 2 -
    arch/ia64/kernel/smpboot.c | 30 +++++++++++----------------
    arch/m32r/kernel/smp.c | 2 -
    arch/m32r/kernel/smpboot.c | 6 ++---
    arch/mips/kernel/smp-cmp.c | 8 ++++---
    arch/mips/kernel/smp-mt.c | 2 -
    arch/mips/kernel/smp.c | 10 ++++-----
    arch/mips/kernel/smtc.c | 6 ++---
    arch/mips/pmc-sierra/yosemite/smp.c | 5 +---
    arch/mips/sgi-ip27/ip27-smp.c | 2 -
    arch/mips/sibyte/bcm1480/smp.c | 5 +---
    arch/mips/sibyte/sb1250/smp.c | 5 +---
    arch/parisc/kernel/processor.c | 2 -
    arch/parisc/kernel/smp.c | 12 +++++-----
    arch/powerpc/kernel/setup-common.c | 6 ++---
    arch/powerpc/kernel/smp.c | 6 ++---
    arch/powerpc/platforms/powermac/setup.c | 2 -
    arch/powerpc/platforms/powermac/smp.c | 4 +--
    arch/powerpc/platforms/pseries/hotplug-cpu.c | 6 ++---
    arch/s390/kernel/smp.c | 19 ++++++++---------
    arch/sh/kernel/cpu/sh4a/smp-shx3.c | 5 +---
    arch/sh/kernel/smp.c | 10 ++++-----
    arch/sparc/kernel/smp.c | 8 +++----
    arch/sparc/kernel/sun4d_smp.c | 2 -
    arch/sparc/kernel/sun4m_smp.c | 2 -
    arch/sparc64/kernel/mdesc.c | 4 +--
    arch/sparc64/kernel/prom.c | 4 +--
    arch/sparc64/kernel/smp.c | 6 ++---
    arch/um/kernel/skas/process.c | 2 -
    arch/um/kernel/smp.c | 10 ++++-----
    arch/x86/kernel/acpi/boot.c | 2 -
    arch/x86/kernel/apic.c | 4 +--
    arch/x86/kernel/smp.c | 2 -
    arch/x86/kernel/smpboot.c | 12 +++++-----
    arch/x86/mach-voyager/voyager_smp.c | 16 +++++++-------
    arch/x86/xen/smp.c | 8 +++----
    init/main.c | 6 ++---
    44 files changed, 138 insertions(+), 144 deletions(-)

    --- linux-2.6.28.orig/arch/alpha/kernel/process.c
    +++ linux-2.6.28/arch/alpha/kernel/process.c
    @@ -93,7 +93,7 @@ common_shutdown_1(void *generic_ptr)
    if (cpuid != boot_cpuid) {
    flags |= 0x00040000UL; /* "remain halted" */
    *pflags = flags;
    - cpu_clear(cpuid, cpu_present_map);
    + set_cpu_present(cpuid, false);
    halt();
    }
    #endif
    @@ -119,7 +119,7 @@ common_shutdown_1(void *generic_ptr)

    #ifdef CONFIG_SMP
    /* Wait for the secondaries to halt. */
    - cpu_clear(boot_cpuid, cpu_present_map);
    + set_cpu_present(boot_cpuid, false);
    while (cpus_weight(cpu_present_map))
    barrier();
    #endif
    --- linux-2.6.28.orig/arch/alpha/kernel/smp.c
    +++ linux-2.6.28/arch/alpha/kernel/smp.c
    @@ -121,10 +121,11 @@ smp_callin(void)
    {
    int cpuid = hard_smp_processor_id();

    - if (cpu_test_and_set(cpuid, cpu_online_map)) {
    + if (cpu_isset(cpuid, cpu_online_map)) {
    printk("??, cpu 0x%x already present??\n", cpuid);
    BUG();
    }
    + set_cpu_online(cpuid, true);

    /* Turn on machine checks. */
    wrmces(7);
    @@ -435,7 +436,7 @@ setup_smp(void)
    ((char *)cpubase + i*hwrpb->processor_size);
    if ((cpu->flags & 0x1cc) == 0x1cc) {
    smp_num_probed++;
    - cpu_set(i, cpu_present_map);
    + set_cpu_present(i, true);
    cpu->pal_revision = boot_cpu_palrev;
    }

    @@ -468,7 +469,7 @@ smp_prepare_cpus(unsigned int max_cpus)

    /* Nothing to do on a UP box, or when told not to. */
    if (smp_num_probed == 1 || max_cpus == 0) {
    - cpu_present_map = cpumask_of_cpu(boot_cpuid);
    + init_cpu_present(cpumask_of(boot_cpuid));
    printk(KERN_INFO "SMP mode deactivated.\n");
    return;
    }
    --- linux-2.6.28.orig/arch/arm/kernel/smp.c
    +++ linux-2.6.28/arch/arm/kernel/smp.c
    @@ -161,7 +161,7 @@ int __cpuexit __cpu_disable(void)
    * Take this CPU offline. Once we clear this, we can't return,
    * and we must not schedule until we're ready to give up the cpu.
    */
    - cpu_clear(cpu, cpu_online_map);
    + set_cpu_online(cpu, false);

    /*
    * OK - migrate IRQs away from this CPU
    @@ -283,7 +283,7 @@ asmlinkage void __cpuinit secondary_star
    /*
    * OK, now it's safe to let the boot CPU continue
    */
    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);

    /*
    * OK, it's off to the idle thread for us
    @@ -415,7 +415,7 @@ static void ipi_cpu_stop(unsigned int cp
    dump_stack();
    spin_unlock(&stop_lock);

    - cpu_clear(cpu, cpu_online_map);
    + set_cpu_online(cpu, false);

    local_fiq_disable();
    local_irq_disable();
    --- linux-2.6.28.orig/arch/arm/mach-realview/platsmp.c
    +++ linux-2.6.28/arch/arm/mach-realview/platsmp.c
    @@ -193,7 +193,7 @@ void __init smp_init_cpus(void)
    unsigned int i, ncores = get_core_count();

    for (i = 0; i < ncores; i++)
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    }

    void __init smp_prepare_cpus(unsigned int max_cpus)
    @@ -242,7 +242,7 @@ void __init smp_prepare_cpus(unsigned in
    * actually populated at the present time.
    */
    for (i = 0; i < max_cpus; i++)
    - cpu_set(i, cpu_present_map);
    + set_cpu_present(i, true);

    /*
    * Initialise the SCU if there are more than one CPU and let
    --- linux-2.6.28.orig/arch/cris/arch-v32/kernel/smp.c
    +++ linux-2.6.28/arch/cris/arch-v32/kernel/smp.c
    @@ -98,9 +98,9 @@ void __devinit smp_prepare_boot_cpu(void
    SUPP_BANK_SEL(2);
    SUPP_REG_WR(RW_MM_TLB_PGD, pgd);

    - cpu_set(0, cpu_online_map);
    + set_cpu_online(0, true);
    cpu_set(0, phys_cpu_present_map);
    - cpu_set(0, cpu_possible_map);
    + set_cpu_possible(0, true);
    }

    void __init smp_cpus_done(unsigned int max_cpus)
    @@ -126,10 +126,10 @@ smp_boot_one_cpu(int cpuid)
    cpu_now_booting = cpuid;

    /* Kick it */
    - cpu_set(cpuid, cpu_online_map);
    + set_cpu_online(cpuid, true);
    cpu_set(cpuid, cpu_mask);
    send_ipi(IPI_BOOT, 0, cpu_mask);
    - cpu_clear(cpuid, cpu_online_map);
    + set_cpu_online(cpuid, false);

    /* Wait for CPU to come online */
    for (timeout = 0; timeout < 10000; timeout++) {
    @@ -177,7 +177,7 @@ void __init smp_callin(void)
    notify_cpu_starting(cpu);
    local_irq_enable();

    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);
    cpu_idle();
    }

    --- linux-2.6.28.orig/arch/ia64/kernel/acpi.c
    +++ linux-2.6.28/arch/ia64/kernel/acpi.c
    @@ -845,7 +845,7 @@ __init void prefill_possible_map(void)
    possible, max((possible - available_cpus), 0));

    for (i = 0; i < possible; i++)
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    }

    int acpi_map_lsapic(acpi_handle handle, int *pcpu)
    @@ -890,7 +890,7 @@ int acpi_map_lsapic(acpi_handle handle,

    acpi_map_cpu2node(handle, cpu, physid);

    - cpu_set(cpu, cpu_present_map);
    + set_cpu_present(cpu, true);
    ia64_cpu_to_sapicid[cpu] = physid;

    *pcpu = cpu;
    @@ -902,7 +902,7 @@ EXPORT_SYMBOL(acpi_map_lsapic);
    int acpi_unmap_lsapic(int cpu)
    {
    ia64_cpu_to_sapicid[cpu] = -1;
    - cpu_clear(cpu, cpu_present_map);
    + set_cpu_present(cpu, false);

    #ifdef CONFIG_ACPI_NUMA
    /* NUMA specific cleanup's */
    --- linux-2.6.28.orig/arch/ia64/kernel/setup.c
    +++ linux-2.6.28/arch/ia64/kernel/setup.c
    @@ -466,7 +466,7 @@ mark_bsp_online (void)
    {
    #ifdef CONFIG_SMP
    /* If we register an early console, allow CPU 0 to printk */
    - cpu_set(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), true);
    #endif
    }

    --- linux-2.6.28.orig/arch/ia64/kernel/smp.c
    +++ linux-2.6.28/arch/ia64/kernel/smp.c
    @@ -76,7 +76,7 @@ stop_this_cpu(void)
    /*
    * Remove this CPU:
    */
    - cpu_clear(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), false);
    max_xtp();
    local_irq_disable();
    cpu_halt();
    --- linux-2.6.28.orig/arch/ia64/kernel/smpboot.c
    +++ linux-2.6.28/arch/ia64/kernel/smpboot.c
    @@ -396,7 +396,7 @@ smp_callin (void)
    /* Setup the per cpu irq handling data structures */
    __setup_vector_irq(cpuid);
    notify_cpu_starting(cpuid);
    - cpu_set(cpuid, cpu_online_map);
    + set_cpu_online(cpuid, true);
    per_cpu(cpu_state, cpuid) = CPU_ONLINE;
    spin_unlock(&vector_lock);
    ipi_call_unlock_irq();
    @@ -550,7 +550,7 @@ do_rest:
    if (!cpu_isset(cpu, cpu_callin_map)) {
    printk(KERN_ERR "Processor 0x%x/0x%x is stuck.\n", cpu, sapicid);
    ia64_cpu_to_sapicid[cpu] = -1;
    - cpu_clear(cpu, cpu_online_map); /* was set in smp_callin() */
    + set_cpu_online(cpu, false); /* was set in smp_callin() */
    return -EINVAL;
    }
    return 0;
    @@ -580,15 +580,14 @@ smp_build_cpu_map (void)
    }

    ia64_cpu_to_sapicid[0] = boot_cpu_id;
    - cpus_clear(cpu_present_map);
    - cpu_set(0, cpu_present_map);
    - cpu_set(0, cpu_possible_map);
    + init_cpu_present(cpumask_of(0));
    + set_cpu_possible(0, true);
    for (cpu = 1, i = 0; i < smp_boot_data.cpu_count; i++) {
    sapicid = smp_boot_data.cpu_phys_id[i];
    if (sapicid == boot_cpu_id)
    continue;
    - cpu_set(cpu, cpu_present_map);
    - cpu_set(cpu, cpu_possible_map);
    + set_cpu_present(cpu, true);
    + set_cpu_possible(cpu, true);
    ia64_cpu_to_sapicid[cpu] = sapicid;
    cpu++;
    }
    @@ -611,7 +610,7 @@ smp_prepare_cpus (unsigned int max_cpus)
    /*
    * We have the boot CPU online for sure.
    */
    - cpu_set(0, cpu_online_map);
    + set_cpu_online(0, true);
    cpu_set(0, cpu_callin_map);

    local_cpu_data->loops_per_jiffy = loops_per_jiffy;
    @@ -626,19 +625,16 @@ smp_prepare_cpus (unsigned int max_cpus)
    */
    if (!max_cpus) {
    printk(KERN_INFO "SMP mode deactivated.\n");
    - cpus_clear(cpu_online_map);
    - cpus_clear(cpu_present_map);
    - cpus_clear(cpu_possible_map);
    - cpu_set(0, cpu_online_map);
    - cpu_set(0, cpu_present_map);
    - cpu_set(0, cpu_possible_map);
    + init_cpu_online(cpumask_of(0));
    + init_cpu_present(cpumask_of(0));
    + init_cpu_possible(cpumask_of(0));
    return;
    }
    }

    void __devinit smp_prepare_boot_cpu(void)
    {
    - cpu_set(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), true);
    cpu_set(smp_processor_id(), cpu_callin_map);
    per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
    paravirt_post_smp_prepare_boot_cpu();
    @@ -737,13 +733,13 @@ int __cpu_disable(void)
    }

    if (migrate_platform_irqs(cpu)) {
    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);
    return (-EBUSY);
    }

    remove_siblinginfo(cpu);
    fixup_irqs();
    - cpu_clear(cpu, cpu_online_map);
    + set_cpu_online(cpu, false);
    local_flush_tlb_all();
    cpu_clear(cpu, cpu_callin_map);
    return 0;
    --- linux-2.6.28.orig/arch/m32r/kernel/smp.c
    +++ linux-2.6.28/arch/m32r/kernel/smp.c
    @@ -531,7 +531,7 @@ static void stop_this_cpu(void *dummy)
    /*
    * Remove this CPU:
    */
    - cpu_clear(cpu_id, cpu_online_map);
    + set_cpu_online(cpu_id, false);

    /*
    * PSW IE = 1;
    --- linux-2.6.28.orig/arch/m32r/kernel/smpboot.c
    +++ linux-2.6.28/arch/m32r/kernel/smpboot.c
    @@ -135,7 +135,7 @@ void __devinit smp_prepare_boot_cpu(void
    {
    bsp_phys_id = hard_smp_processor_id();
    physid_set(bsp_phys_id, phys_cpu_present_map);
    - cpu_set(0, cpu_online_map); /* BSP's cpu_id == 0 */
    + set_cpu_online(0, true); /* BSP's cpu_id == 0 */
    cpu_set(0, cpu_callout_map);
    cpu_set(0, cpu_callin_map);

    @@ -178,7 +178,7 @@ void __init smp_prepare_cpus(unsigned in
    for (phys_id = 0 ; phys_id < nr_cpu ; phys_id++)
    physid_set(phys_id, phys_cpu_present_map);
    #ifndef CONFIG_HOTPLUG_CPU
    - cpu_present_map = cpu_possible_map;
    + init_cpu_present(&cpu_possible_map);
    #endif

    show_mp_info(nr_cpu);
    @@ -503,7 +503,7 @@ static void __init smp_online(void)
    /* Save our processor parameters */
    smp_store_cpu_info(cpu_id);

    - cpu_set(cpu_id, cpu_online_map);
    + set_cpu_online(cpu_id, true);
    }

/*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*/
    --- linux-2.6.28.orig/arch/mips/kernel/smp-cmp.c
    +++ linux-2.6.28/arch/mips/kernel/smp-cmp.c
    @@ -52,8 +52,10 @@ static int __init allowcpus(char *str)

    cpus_clear(cpu_allow_map);
    if (cpulist_parse(str, &cpu_allow_map) == 0) {
    - cpu_set(0, cpu_allow_map);
    - cpus_and(cpu_possible_map, cpu_possible_map, cpu_allow_map);
    + unsigned int i;
    + for (i = 1; i < nr_cpu_ids; i++)
+ if (!cpumask_test_cpu(i, &cpu_allow_map))
    + set_cpu_possible(i, false);
    len = cpulist_scnprintf(buf, sizeof(buf)-1, &cpu_possible_map);
    buf[len] = '\0';
    pr_debug("Allowable CPUs: %s\n", buf);
    @@ -226,7 +228,7 @@ void __init cmp_smp_setup(void)

    for (i = 1; i < nr_cpu_ids; i++) {
    if (amon_cpu_avail(i)) {
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    __cpu_number_map[i] = ++ncpu;
    __cpu_logical_map[ncpu] = i;
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smp-mt.c
    +++ linux-2.6.28/arch/mips/kernel/smp-mt.c
    @@ -70,7 +70,7 @@ static unsigned int __init smvp_vpe_init
    write_vpe_c0_vpeconf0(tmp);

    /* Record this as available CPU */
    - cpu_set(tc, cpu_possible_map);
    + set_cpu_possible(tc, true);
    __cpu_number_map[tc] = ++ncpu;
    __cpu_logical_map[ncpu] = tc;
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smp.c
    +++ linux-2.6.28/arch/mips/kernel/smp.c
    @@ -157,7 +157,7 @@ static void stop_this_cpu(void *dummy)
    /*
    * Remove this CPU:
    */
    - cpu_clear(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), false);
    local_irq_enable(); /* May need to service _machine_restart IPI */
for (;;); /* Wait if available. */
    }
    @@ -181,7 +181,7 @@ void __init smp_prepare_cpus(unsigned in
    mp_ops->prepare_cpus(max_cpus);
    set_cpu_sibling_map(0);
    #ifndef CONFIG_HOTPLUG_CPU
    - cpu_present_map = cpu_possible_map;
    + init_cpu_present(&cpu_possible_map);
    #endif
    }

    @@ -194,8 +194,8 @@ void __devinit smp_prepare_boot_cpu(void
    */
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;
    - cpu_set(0, cpu_possible_map);
    - cpu_set(0, cpu_online_map);
    + set_cpu_possible(0, true);
    + set_cpu_online(0, true);
    cpu_set(0, cpu_callin_map);
    }

    @@ -225,7 +225,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
    while (!cpu_isset(cpu, cpu_callin_map))
    udelay(100);

    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);

    return 0;
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smtc.c
    +++ linux-2.6.28/arch/mips/kernel/smtc.c
    @@ -304,7 +304,7 @@ int __init smtc_build_cpu_map(int start_
    */
    ntcs = ((read_c0_mvpconf0() & MVPCONF0_PTC) >> MVPCONF0_PTC_SHIFT) + 1;
    for (i = start_cpu_slot; i < nr_cpu_ids && i < ntcs; i++) {
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    __cpu_number_map[i] = i;
    __cpu_logical_map[i] = i;
    }
    @@ -521,8 +521,8 @@ void smtc_prepare_cpus(int cpus)
    * Pull any physically present but unused TCs out of circulation.
    */
    while (tc < (((val & MVPCONF0_PTC) >> MVPCONF0_PTC_SHIFT) + 1)) {
    - cpu_clear(tc, cpu_possible_map);
    - cpu_clear(tc, cpu_present_map);
    + set_cpu_possible(tc, false);
    + set_cpu_present(tc, false);
    tc++;
    }

    --- linux-2.6.28.orig/arch/mips/pmc-sierra/yosemite/smp.c
    +++ linux-2.6.28/arch/mips/pmc-sierra/yosemite/smp.c
    @@ -150,10 +150,9 @@ static void __init yos_smp_setup(void)
    {
    int i;

    - cpus_clear(cpu_possible_map);
    -
    + init_cpu_possible(cpumask_of(0));
    for (i = 0; i < 2; i++) {
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    __cpu_number_map[i] = i;
    __cpu_logical_map[i] = i;
    }
    --- linux-2.6.28.orig/arch/mips/sgi-ip27/ip27-smp.c
    +++ linux-2.6.28/arch/mips/sgi-ip27/ip27-smp.c
    @@ -76,7 +76,7 @@ static int do_cpumask(cnodeid_t cnode, n
    /* Only let it join in if it's marked enabled */
    if ((acpu->cpu_info.flags & KLINFO_ENABLE) &&
    (tot_cpus_found != NR_CPUS)) {
    - cpu_set(cpuid, cpu_possible_map);
    + set_cpu_possible(cpuid, true);
    alloc_cpupda(cpuid, tot_cpus_found);
    cpus_found++;
    tot_cpus_found++;
    --- linux-2.6.28.orig/arch/mips/sibyte/bcm1480/smp.c
    +++ linux-2.6.28/arch/mips/sibyte/bcm1480/smp.c
    @@ -145,14 +145,13 @@ static void __init bcm1480_smp_setup(voi
    {
    int i, num;

    - cpus_clear(cpu_possible_map);
    - cpu_set(0, cpu_possible_map);
    + init_cpu_possible(cpumask_of(0));
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;

    for (i = 1, num = 0; i < nr_cpu_ids; i++) {
    if (cfe_cpu_stop(i) == 0) {
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    __cpu_number_map[i] = ++num;
    __cpu_logical_map[num] = i;
    }
    --- linux-2.6.28.orig/arch/mips/sibyte/sb1250/smp.c
    +++ linux-2.6.28/arch/mips/sibyte/sb1250/smp.c
    @@ -133,14 +133,13 @@ static void __init sb1250_smp_setup(void
    {
    int i, num;

    - cpus_clear(cpu_possible_map);
    - cpu_set(0, cpu_possible_map);
    + init_cpu_possible(cpumask_of(0));
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;

    for (i = 1, num = 0; i < nr_cpu_ids; i++) {
    if (cfe_cpu_stop(i) == 0) {
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    __cpu_number_map[i] = ++num;
    __cpu_logical_map[num] = i;
    }
    --- linux-2.6.28.orig/arch/parisc/kernel/processor.c
    +++ linux-2.6.28/arch/parisc/kernel/processor.c
    @@ -200,7 +200,7 @@ static int __cpuinit processor_probe(str
    */
    #ifdef CONFIG_SMP
    if (cpuid) {
    - cpu_set(cpuid, cpu_present_map);
    + set_cpu_present(cpuid, true);
    cpu_up(cpuid);
    }
    #endif
    --- linux-2.6.28.orig/arch/parisc/kernel/smp.c
    +++ linux-2.6.28/arch/parisc/kernel/smp.c
    @@ -112,7 +112,7 @@ halt_processor(void)
    {
    /* REVISIT : redirect I/O Interrupts to another CPU? */
    /* REVISIT : does PM *know* this CPU isn't available? */
    - cpu_clear(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), false);
    local_irq_disable();
for (;;)
    ;
    @@ -298,13 +298,14 @@ smp_cpu_init(int cpunum)
    mb();

    /* Well, support 2.4 linux scheme as well. */
    - if (cpu_test_and_set(cpunum, cpu_online_map))
    + if (cpu_isset(cpunum, cpu_online_map))
    {
    extern void machine_halt(void); /* arch/parisc.../process.c */

    printk(KERN_CRIT "CPU#%d already initialized!\n", cpunum);
    machine_halt();
    }
    + set_cpu_online(cpunum, true);

    /* Initialise the idle task for this CPU */
    atomic_inc(&init_mm.mm_count);
    @@ -426,8 +427,8 @@ void __devinit smp_prepare_boot_cpu(void
    /* Setup BSP mappings */
    printk("SMP: bootstrap CPU ID is %d\n",bootstrap_processor);

    - cpu_set(bootstrap_processor, cpu_online_map);
    - cpu_set(bootstrap_processor, cpu_present_map);
    + set_cpu_online(bootstrap_processor, true);
    + set_cpu_present(bootstrap_processor, true);
    }


    @@ -438,8 +439,7 @@ void __devinit smp_prepare_boot_cpu(void
    */
    void __init smp_prepare_cpus(unsigned int max_cpus)
    {
    - cpus_clear(cpu_present_map);
    - cpu_set(0, cpu_present_map);
    + init_cpu_present(cpumask_of(0));

    parisc_max_cpus = max_cpus;
    if (!max_cpus)
    --- linux-2.6.28.orig/arch/powerpc/kernel/setup-common.c
    +++ linux-2.6.28/arch/powerpc/kernel/setup-common.c
    @@ -424,9 +424,9 @@ void __init smp_setup_cpu_maps(void)
    for (j = 0; j < nthreads && cpu < nr_cpu_ids; j++) {
    DBG(" thread %d -> cpu %d (hard id %d)\n",
    j, cpu, intserv[j]);
    - cpu_set(cpu, cpu_present_map);
    + set_cpu_present(cpu, true);
    set_hard_smp_processor_id(cpu, intserv[j]);
    - cpu_set(cpu, cpu_possible_map);
    + set_cpu_possible(cpu, true);
    cpu++;
    }
    }
    @@ -472,7 +472,7 @@ void __init smp_setup_cpu_maps(void)
    maxcpus);

    for (cpu = 0; cpu < maxcpus; cpu++)
    - cpu_set(cpu, cpu_possible_map);
    + set_cpu_possible(cpu, true);
    out:
    of_node_put(dn);
    }
    --- linux-2.6.28.orig/arch/powerpc/kernel/smp.c
    +++ linux-2.6.28/arch/powerpc/kernel/smp.c
    @@ -225,7 +225,7 @@ void __devinit smp_prepare_boot_cpu(void
    {
    BUG_ON(smp_processor_id() != boot_cpuid);

    - cpu_set(boot_cpuid, cpu_online_map);
    + set_cpu_online(boot_cpuid, true);
    cpu_set(boot_cpuid, per_cpu(cpu_sibling_map, boot_cpuid));
    cpu_set(boot_cpuid, per_cpu(cpu_core_map, boot_cpuid));
    #ifdef CONFIG_PPC64
    @@ -245,7 +245,7 @@ int generic_cpu_disable(void)
    if (cpu == boot_cpuid)
    return -EBUSY;

    - cpu_clear(cpu, cpu_online_map);
    + set_cpu_online(cpu, false);
    #ifdef CONFIG_PPC64
    vdso_data->processorCount--;
    fixup_irqs(cpu_online_map);
    @@ -299,7 +299,7 @@ void generic_mach_cpu_die(void)
    smp_wmb();
    while (__get_cpu_var(cpu_state) != CPU_UP_PREPARE)
    cpu_relax();
    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);
    local_irq_enable();
    }
    #endif
    --- linux-2.6.28.orig/arch/powerpc/platforms/powermac/setup.c
    +++ linux-2.6.28/arch/powerpc/platforms/powermac/setup.c
    @@ -366,7 +366,7 @@ static void __init pmac_setup_arch(void)
    int cpu;

    for (cpu = 1; cpu < 4 && cpu < nr_cpu_ids; ++cpu)
    - cpu_set(cpu, cpu_possible_map);
    + set_cpu_possible(cpu, true);
    smp_ops = &psurge_smp_ops;
    }
    #endif
    --- linux-2.6.28.orig/arch/powerpc/platforms/powermac/smp.c
    +++ linux-2.6.28/arch/powerpc/platforms/powermac/smp.c
    @@ -317,7 +317,7 @@ static int __init smp_psurge_probe(void)
    if (ncpus > nr_cpu_ids)
    ncpus = nr_cpu_ids;
    for (i = 1; i < ncpus ; ++i) {
    - cpu_set(i, cpu_present_map);
    + set_cpu_present(i, true);
    set_hard_smp_processor_id(i, i);
    }

    @@ -861,7 +861,7 @@ static void __devinit smp_core99_setup_c

    int smp_core99_cpu_disable(void)
    {
    - cpu_clear(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), false);

    /* XXX reset cpu affinity here */
    mpic_cpu_set_priority(0xf);
    --- linux-2.6.28.orig/arch/powerpc/platforms/pseries/hotplug-cpu.c
    +++ linux-2.6.28/arch/powerpc/platforms/pseries/hotplug-cpu.c
    @@ -94,7 +94,7 @@ static int pseries_cpu_disable(void)
    {
    int cpu = smp_processor_id();

    - cpu_clear(cpu, cpu_online_map);
    + set_cpu_online(cpu, false);
    vdso_data->processorCount--;

    /*fix boot_cpuid here*/
    @@ -185,7 +185,7 @@ static int pseries_add_processor(struct

    for_each_cpu_mask(cpu, tmp) {
    BUG_ON(cpu_isset(cpu, cpu_present_map));
    - cpu_set(cpu, cpu_present_map);
    + set_cpu_present(cpu, true);
    set_hard_smp_processor_id(cpu, *intserv++);
    }
    err = 0;
    @@ -217,7 +217,7 @@ static void pseries_remove_processor(str
    if (get_hard_smp_processor_id(cpu) != intserv[i])
    continue;
    BUG_ON(cpu_online(cpu));
    - cpu_clear(cpu, cpu_present_map);
    + set_cpu_present(cpu, false);
    set_hard_smp_processor_id(cpu, -1);
    break;
    }
    --- linux-2.6.28.orig/arch/s390/kernel/smp.c
    +++ linux-2.6.28/arch/s390/kernel/smp.c
    @@ -451,7 +451,7 @@ static int smp_rescan_cpus_sigp(cpumask_
    smp_cpu_polarization[logical_cpu] = POLARIZATION_UNKNWN;
    if (!cpu_stopped(logical_cpu))
    continue;
    - cpu_set(logical_cpu, cpu_present_map);
    + set_cpu_present(logical_cpu, true);
    smp_cpu_state[logical_cpu] = CPU_STATE_CONFIGURED;
    logical_cpu = next_cpu(logical_cpu, avail);
    if (logical_cpu >= nr_cpu_ids)
    @@ -483,7 +483,7 @@ static int smp_rescan_cpus_sclp(cpumask_
    continue;
    __cpu_logical_map[logical_cpu] = cpu_id;
    smp_cpu_polarization[logical_cpu] = POLARIZATION_UNKNWN;
    - cpu_set(logical_cpu, cpu_present_map);
    + set_cpu_present(logical_cpu, true);
    if (cpu >= info->configured)
    smp_cpu_state[logical_cpu] = CPU_STATE_STANDBY;
    else
    @@ -587,7 +587,7 @@ int __cpuinit start_secondary(void *cpuv
    notify_cpu_starting(smp_processor_id());
    /* Mark this cpu as online */
    spin_lock(&call_lock);
    - cpu_set(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), true);
    spin_unlock(&call_lock);
    /* Switch on interrupts */
    local_irq_enable();
    @@ -730,9 +730,8 @@ static int __init setup_possible_cpus(ch
    int pcpus, cpu;

    pcpus = simple_strtoul(s, NULL, 0);
    - cpu_possible_map = cpumask_of_cpu(0);
    - for (cpu = 1; cpu < pcpus && cpu < nr_cpu_ids; cpu++)
    - cpu_set(cpu, cpu_possible_map);
    + for (cpu = 0; cpu < pcpus && cpu < nr_cpu_ids; cpu++)
    + set_cpu_possible(cpu, true);
    return 0;
    }
    early_param("possible_cpus", setup_possible_cpus);
    @@ -744,7 +743,7 @@ int __cpu_disable(void)
    struct ec_creg_mask_parms cr_parms;
    int cpu = smp_processor_id();

    - cpu_clear(cpu, cpu_online_map);
    + set_cpu_online(cpu, false);

    /* Disable pfault pseudo page faults on this cpu. */
    pfault_fini();
    @@ -838,8 +837,8 @@ void __init smp_prepare_boot_cpu(void)
    BUG_ON(smp_processor_id() != 0);

    current_thread_info()->cpu = 0;
    - cpu_set(0, cpu_present_map);
    - cpu_set(0, cpu_online_map);
    + set_cpu_present(0, true);
    + set_cpu_online(0, true);
    S390_lowcore.percpu_offset = __per_cpu_offset[0];
    current_set[0] = current;
    smp_cpu_state[0] = CPU_STATE_CONFIGURED;
    @@ -1106,7 +1105,7 @@ int __ref smp_rescan_cpus(void)
    for_each_cpu_mask(cpu, newcpus) {
    rc = smp_add_present_cpu(cpu);
    if (rc)
    - cpu_clear(cpu, cpu_present_map);
    + set_cpu_present(cpu, false);
    }
    rc = 0;
    out:
    --- linux-2.6.28.orig/arch/sh/kernel/cpu/sh4a/smp-shx3.c
    +++ linux-2.6.28/arch/sh/kernel/cpu/sh4a/smp-shx3.c
    @@ -35,8 +35,7 @@ void __init plat_smp_setup(void)
    unsigned int cpu = 0;
    int i, num;

    - cpus_clear(cpu_possible_map);
    - cpu_set(cpu, cpu_possible_map);
    + init_cpu_possible(cpumask_of(cpu));

    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;
    @@ -46,7 +45,7 @@ void __init plat_smp_setup(void)
    * for the total number of cores.
    */
    for (i = 1, num = 0; i < NR_CPUS; i++) {
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    __cpu_number_map[i] = ++num;
    __cpu_logical_map[num] = i;
    }
    --- linux-2.6.28.orig/arch/sh/kernel/smp.c
    +++ linux-2.6.28/arch/sh/kernel/smp.c
    @@ -46,7 +46,7 @@ void __init smp_prepare_cpus(unsigned in
    plat_prepare_cpus(max_cpus);

    #ifndef CONFIG_HOTPLUG_CPU
    - cpu_present_map = cpu_possible_map;
    + init_cpu_present(&cpu_possible_map);
    #endif
    }

    @@ -57,8 +57,8 @@ void __devinit smp_prepare_boot_cpu(void
    __cpu_number_map[0] = cpu;
    __cpu_logical_map[0] = cpu;

    - cpu_set(cpu, cpu_online_map);
    - cpu_set(cpu, cpu_possible_map);
    + set_cpu_online(cpu, true);
    + set_cpu_possible(cpu, true);
    }

    asmlinkage void __cpuinit start_secondary(void)
    @@ -88,7 +88,7 @@ asmlinkage void __cpuinit start_secondar

    smp_store_cpu_info(cpu);

    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);

    cpu_idle();
    }
    @@ -158,7 +158,7 @@ void smp_send_reschedule(int cpu)

    static void stop_this_cpu(void *unused)
    {
    - cpu_clear(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), false);
    local_irq_disable();

for (;;)
    --- linux-2.6.28.orig/arch/sparc/kernel/smp.c
    +++ linux-2.6.28/arch/sparc/kernel/smp.c
    @@ -331,8 +331,8 @@ void __init smp_setup_cpu_possible_map(v
    instance = 0;
    while (!cpu_find_by_instance(instance, NULL, &mid)) {
    if (mid < NR_CPUS) {
    - cpu_set(mid, cpu_possible_map);
    - cpu_set(mid, cpu_present_map);
    + set_cpu_possible(mid, true);
    + set_cpu_present(mid, true);
    }
    instance++;
    }
    @@ -350,8 +350,8 @@ void __init smp_prepare_boot_cpu(void)
    printk("boot cpu id != 0, this could work but is untested\n");

    current_thread_info()->cpu = cpuid;
    - cpu_set(cpuid, cpu_online_map);
    - cpu_set(cpuid, cpu_possible_map);
    + set_cpu_online(cpuid, true);
    + set_cpu_possible(cpuid, true);
    }

    int __cpuinit __cpu_up(unsigned int cpu)
    --- linux-2.6.28.orig/arch/sparc/kernel/sun4d_smp.c
    +++ linux-2.6.28/arch/sparc/kernel/sun4d_smp.c
    @@ -150,7 +150,7 @@ void __init smp4d_callin(void)
    spin_lock_irqsave(&sun4d_imsk_lock, flags);
    cc_set_imsk(cc_get_imsk() & ~0x4000); /* Allow PIL 14 as well */
    spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
    - cpu_set(cpuid, cpu_online_map);
    + set_cpu_online(cpuid, true);

    }

    --- linux-2.6.28.orig/arch/sparc/kernel/sun4m_smp.c
    +++ linux-2.6.28/arch/sparc/kernel/sun4m_smp.c
    @@ -112,7 +112,7 @@ void __cpuinit smp4m_callin(void)

    local_irq_enable();

    - cpu_set(cpuid, cpu_online_map);
    + set_cpu_online(cpuid, true);
    }

    /*
    --- linux-2.6.28.orig/arch/sparc64/kernel/mdesc.c
    +++ linux-2.6.28/arch/sparc64/kernel/mdesc.c
    @@ -566,7 +566,7 @@ static void __init report_platform_prope
    max_cpu = NR_CPUS;
    }
    for (i = 0; i < max_cpu; i++)
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    }
    #endif

    @@ -826,7 +826,7 @@ void __cpuinit mdesc_fill_in_cpu_data(cp
    }

    #ifdef CONFIG_SMP
    - cpu_set(cpuid, cpu_present_map);
    + set_cpu_present(cpuid, true);
    #endif

    c->core_id = 0;
    --- linux-2.6.28.orig/arch/sparc64/kernel/prom.c
    +++ linux-2.6.28/arch/sparc64/kernel/prom.c
    @@ -1601,8 +1601,8 @@ static void __init of_fill_in_cpu_data(v
    }

    #ifdef CONFIG_SMP
    - cpu_set(cpuid, cpu_present_map);
    - cpu_set(cpuid, cpu_possible_map);
    + set_cpu_present(cpuid, true);
    + set_cpu_possible(cpuid, true);
    #endif
    }

    --- linux-2.6.28.orig/arch/sparc64/kernel/smp.c
    +++ linux-2.6.28/arch/sparc64/kernel/smp.c
    @@ -119,7 +119,7 @@ void __cpuinit smp_callin(void)
    rmb();

    ipi_call_lock();
    - cpu_set(cpuid, cpu_online_map);
    + set_cpu_online(cpuid, true);
    ipi_call_unlock();

    /* idle thread is expected to have preempt disabled */
    @@ -1313,7 +1313,7 @@ int __cpu_disable(void)
    local_irq_disable();

    ipi_call_lock();
    - cpu_clear(cpu, cpu_online_map);
    + set_cpu_online(cpu, false);
    ipi_call_unlock();

    return 0;
    @@ -1339,7 +1339,7 @@ void __cpu_die(unsigned int cpu)
    do {
    hv_err = sun4v_cpu_stop(cpu);
    if (hv_err == HV_EOK) {
    - cpu_clear(cpu, cpu_present_map);
    + set_cpu_present(cpu, false);
    break;
    }
    } while (--limit > 0);
    --- linux-2.6.28.orig/arch/um/kernel/skas/process.c
    +++ linux-2.6.28/arch/um/kernel/skas/process.c
    @@ -41,7 +41,7 @@ static int __init start_kernel_proc(void
    cpu_tasks[0].pid = pid;
    cpu_tasks[0].task = current;
    #ifdef CONFIG_SMP
    - cpu_online_map = cpumask_of_cpu(0);
    + init_cpu_online(cpumask_of(0));
    #endif
    start_kernel();
    return 0;
    --- linux-2.6.28.orig/arch/um/kernel/smp.c
    +++ linux-2.6.28/arch/um/kernel/smp.c
    @@ -79,7 +79,7 @@ static int idle_proc(void *cpup)
    cpu_relax();

    notify_cpu_starting(cpu);
    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);
    default_idle();
    return 0;
    }
    @@ -111,10 +111,10 @@ void smp_prepare_cpus(unsigned int maxcp
    int i;

    for (i = 0; i < ncpus; ++i)
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);

    - cpu_clear(me, cpu_online_map);
    - cpu_set(me, cpu_online_map);
    + set_cpu_online(me, false);
    + set_cpu_online(me, true);
    cpu_set(me, cpu_callin_map);

    err = os_pipe(cpu_data[me].ipi_pipe, 1, 1);
    @@ -141,7 +141,7 @@ void smp_prepare_cpus(unsigned int maxcp

    void smp_prepare_boot_cpu(void)
    {
    - cpu_set(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), true);
    }

    int __cpu_up(unsigned int cpu)
    --- linux-2.6.28.orig/arch/x86/kernel/acpi/boot.c
    +++ linux-2.6.28/arch/x86/kernel/acpi/boot.c
    @@ -597,7 +597,7 @@ EXPORT_SYMBOL(acpi_map_lsapic);
    int acpi_unmap_lsapic(int cpu)
    {
    per_cpu(x86_cpu_to_apicid, cpu) = -1;
    - cpu_clear(cpu, cpu_present_map);
    + set_cpu_present(cpu, false);
    num_processors--;

    return (0);
    --- linux-2.6.28.orig/arch/x86/kernel/apic.c
    +++ linux-2.6.28/arch/x86/kernel/apic.c
    @@ -1903,8 +1903,8 @@ void __cpuinit generic_processor_info(in
    }
    #endif

    - cpu_set(cpu, cpu_possible_map);
    - cpu_set(cpu, cpu_present_map);
    + set_cpu_possible(cpu, true);
    + set_cpu_present(cpu, true);
    }

    #ifdef CONFIG_X86_64
    --- linux-2.6.28.orig/arch/x86/kernel/smp.c
    +++ linux-2.6.28/arch/x86/kernel/smp.c
    @@ -146,7 +146,7 @@ static void stop_this_cpu(void *dummy)
    /*
    * Remove this CPU:
    */
    - cpu_clear(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), false);
    disable_local_APIC();
    if (hlt_works(smp_processor_id()))
for (;;) halt();
    --- linux-2.6.28.orig/arch/x86/kernel/smpboot.c
    +++ linux-2.6.28/arch/x86/kernel/smpboot.c
    @@ -941,7 +941,7 @@ restore_state:
    numa_remove_cpu(cpu); /* was set by numa_add_cpu */
    cpu_clear(cpu, cpu_callout_map); /* was set by do_boot_cpu() */
    cpu_clear(cpu, cpu_initialized); /* was set by cpu_init() */
    - cpu_clear(cpu, cpu_present_map);
    + set_cpu_present(cpu, false);
    per_cpu(x86_cpu_to_apicid, cpu) = BAD_APICID;
    }

    @@ -1030,8 +1030,8 @@ int __cpuinit native_cpu_up(unsigned int
    */
    static __init void disable_smp(void)
    {
    - cpu_present_map = cpumask_of_cpu(0);
    - cpu_possible_map = cpumask_of_cpu(0);
    + init_cpu_present(cpumask_of(0));
    + init_cpu_possible(cpumask_of(0));
    smpboot_clear_io_apic_irqs();

    if (smp_found_config)
    @@ -1062,14 +1062,14 @@ static int __init smp_sanity_check(unsig
    nr = 0;
    for_each_present_cpu(cpu) {
    if (nr >= 8)
    - cpu_clear(cpu, cpu_present_map);
    + set_cpu_present(cpu, false);
    nr++;
    }

    nr = 0;
    for_each_possible_cpu(cpu) {
    if (nr >= 8)
    - cpu_clear(cpu, cpu_possible_map);
    + set_cpu_possible(cpu, false);
    nr++;
    }

    @@ -1288,7 +1288,7 @@ __init void prefill_possible_map(void)
    possible, max_t(int, possible - num_processors, 0));

    for (i = 0; i < possible; i++)
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);

    nr_cpu_ids = possible;
    }
    --- linux-2.6.28.orig/arch/x86/mach-voyager/voyager_smp.c
    +++ linux-2.6.28/arch/x86/mach-voyager/voyager_smp.c
    @@ -371,7 +371,7 @@ void __init find_smp_config(void)
    cpus_addr(phys_cpu_present_map)[0] |=
voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK +
    3) << 24;
    - cpu_possible_map = phys_cpu_present_map;
    + init_cpu_possible(&phys_cpu_present_map);
    printk("VOYAGER SMP: phys_cpu_present_map = 0x%lx\n",
    cpus_addr(phys_cpu_present_map)[0]);
    /* Here we set up the VIC to enable SMP */
    @@ -471,7 +471,7 @@ static void __init start_secondary(void

    local_flush_tlb();

    - cpu_set(cpuid, cpu_online_map);
    + set_cpu_online(cpuid, true);
    wmb();
    cpu_idle();
    }
    @@ -595,7 +595,7 @@ static void __init do_boot_cpu(__u8 cpu)
    print_cpu_info(&cpu_data(cpu));
    wmb();
    cpu_set(cpu, cpu_callout_map);
    - cpu_set(cpu, cpu_present_map);
    + set_cpu_present(cpu, true);
    } else {
    printk("CPU%d FAILED TO BOOT: ", cpu);
    if (*
    @@ -656,7 +656,7 @@ void __init smp_boot_cpus(void)
    /* enable our own CPIs */
    vic_enable_cpi();

    - cpu_set(boot_cpu_id, cpu_online_map);
    + set_cpu_online(boot_cpu_id, true);
    cpu_set(boot_cpu_id, cpu_callout_map);

    /* loop over all the extended VIC CPUs and boot them. The
    @@ -939,7 +939,7 @@ static void smp_enable_irq_interrupt(voi
    static void smp_stop_cpu_function(void *dummy)
    {
    VDEBUG(("VOYAGER SMP: CPU%d is STOPPING\n", smp_processor_id()));
    - cpu_clear(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), false);
    local_irq_disable();
for (;;)
    halt();
    @@ -1740,10 +1740,10 @@ static void __cpuinit voyager_smp_prepar
    init_gdt(smp_processor_id());
    switch_to_new_gdt();

    - cpu_set(smp_processor_id(), cpu_online_map);
    + set_cpu_online(smp_processor_id(), true);
    cpu_set(smp_processor_id(), cpu_callout_map);
    - cpu_set(smp_processor_id(), cpu_possible_map);
    - cpu_set(smp_processor_id(), cpu_present_map);
    + set_cpu_possible(smp_processor_id(), true);
    + set_cpu_present(smp_processor_id(), true);
    }

    static int __cpuinit voyager_cpu_up(unsigned int cpu)
    --- linux-2.6.28.orig/arch/x86/xen/smp.c
    +++ linux-2.6.28/arch/x86/xen/smp.c
    @@ -77,7 +77,7 @@ static __cpuinit void cpu_bringup(void)

    xen_setup_cpu_clockevents();

    - cpu_set(cpu, cpu_online_map);
    + set_cpu_online(cpu, true);
    x86_write_percpu(cpu_state, CPU_ONLINE);
    wmb();

    @@ -162,7 +162,7 @@ static void __init xen_fill_possible_map
    rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
    if (rc >= 0) {
    num_processors++;
    - cpu_set(i, cpu_possible_map);
    + set_cpu_possible(i, true);
    }
    }
    }
    @@ -198,7 +198,7 @@ static void __init xen_smp_prepare_cpus(
    while ((num_possible_cpus() > 1) && (num_possible_cpus() > max_cpus)) {
    for (cpu = nr_cpu_ids - 1; !cpu_possible(cpu); cpu--)
    continue;
    - cpu_clear(cpu, cpu_possible_map);
    + set_cpu_possible(cpu, false);
    }

    for_each_possible_cpu (cpu) {
    @@ -211,7 +211,7 @@ static void __init xen_smp_prepare_cpus(
    if (IS_ERR(idle))
    panic("failed fork for CPU %d", cpu);

    - cpu_set(cpu, cpu_present_map);
    + set_cpu_present(cpu, true);
    }
    }

    --- linux-2.6.28.orig/init/main.c
    +++ linux-2.6.28/init/main.c
    @@ -529,9 +529,9 @@ static void __init boot_cpu_init(void)
    {
    int cpu = smp_processor_id();
    /* Mark the boot cpu "present", "online" etc for SMP and UP case */
    - cpu_set(cpu, cpu_online_map);
    - cpu_set(cpu, cpu_present_map);
    - cpu_set(cpu, cpu_possible_map);
    + set_cpu_online(cpu, true);
    + set_cpu_present(cpu, true);
    + set_cpu_possible(cpu, true);
    }

    void __init __weak smp_setup_processor_id(void)


  9. [PATCH 20/35] cpumask: for_each_cpu(): for_each_cpu_mask which takes a pointer From: Rusty Russell <rusty@rustcorp.com.au>

    We want to wean people off handing around cpumask_t's, and have them
    pass by pointer instead. This does for_each_cpu_mask().

We immediately convert core files that were doing
"for_each_cpu_mask(... *mask)" since this is clearer.

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 25 ++++++++++++++-----------
    kernel/sched.c | 40 ++++++++++++++++++++--------------------
    kernel/workqueue.c | 6 +++---
    lib/cpumask.c | 2 +-
    mm/allocpercpu.c | 4 ++--
    mm/vmstat.c | 4 ++--
    6 files changed, 42 insertions(+), 39 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -97,8 +97,8 @@
    * void cpumask_onto(dst, orig, relmap) *dst = orig relative to relmap
    * void cpumask_fold(dst, orig, sz) dst bits = orig bits mod sz
    *
    - * for_each_cpu_mask(cpu, mask) for-loop cpu over mask using nr_cpu_ids
    - * for_each_cpu_mask_and(cpu, mask, and) for-loop cpu over (mask & and).
    + * for_each_cpu(cpu, mask) for-loop cpu over mask, <= nr_cpu_ids
    + * for_each_cpu_and(cpu, mask, and) for-loop cpu over (mask & and).
    *
    * int num_online_cpus() Number of online CPUs
    * int num_possible_cpus() Number of all possible CPUs
    @@ -175,6 +175,9 @@ extern cpumask_t _unused_cpumask_arg_;
    #define cpus_weight_nr(cpumask) cpus_weight(cpumask)
    #define for_each_cpu_mask_nr(cpu, mask) for_each_cpu_mask(cpu, mask)
    #define cpumask_of_cpu(cpu) (*cpumask_of(cpu))
    +#define for_each_cpu_mask(cpu, mask) for_each_cpu(cpu, &(mask))
    +#define for_each_cpu_mask_and(cpu, mask, and) \
    + for_each_cpu_and(cpu, &(mask), &(and))
    /* End deprecated region. */

    #if NR_CPUS > 1
    @@ -443,9 +446,9 @@ extern cpumask_t cpu_mask_all;
    #define cpumask_next_and(n, srcp, andp) ({ (void)(srcp), (void)(andp); 1; })
    #define any_online_cpu(mask) 0

    -#define for_each_cpu_mask(cpu, mask) \
    +#define for_each_cpu(cpu, mask) \
    for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
    -#define for_each_cpu_mask_and(cpu, mask, and) \
    +#define for_each_cpu_and(cpu, mask, and) \
    for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)and)

    #else /* NR_CPUS > 1 */
    @@ -459,13 +462,13 @@ int __any_online_cpu(const cpumask_t *ma
    #define next_cpu(n, src) __next_cpu((n), &(src))
    #define any_online_cpu(mask) __any_online_cpu(&(mask))

    -#define for_each_cpu_mask(cpu, mask) \
    +#define for_each_cpu(cpu, mask) \
    for ((cpu) = -1; \
    - (cpu) = next_cpu((cpu), (mask)), \
    + (cpu) = __next_cpu((cpu), (mask)), \
(cpu) < nr_cpu_ids;)
    -#define for_each_cpu_mask_and(cpu, mask, and) \
    +#define for_each_cpu_and(cpu, mask, and) \
    for ((cpu) = -1; \
    - (cpu) = cpumask_next_and((cpu), &(mask), &(and)), \
    + (cpu) = cpumask_next_and((cpu), (mask), (and)), \
(cpu) < nr_cpu_ids;)

    #define num_online_cpus() cpus_weight(cpu_online_map)
    @@ -597,8 +600,8 @@ extern cpumask_t cpu_active_map;

    #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))

    -#define for_each_possible_cpu(cpu) for_each_cpu_mask((cpu), cpu_possible_map)
    -#define for_each_online_cpu(cpu) for_each_cpu_mask((cpu), cpu_online_map)
    -#define for_each_present_cpu(cpu) for_each_cpu_mask((cpu), cpu_present_map)
    +#define for_each_possible_cpu(cpu) for_each_cpu((cpu), &cpu_possible_map)
    +#define for_each_online_cpu(cpu) for_each_cpu((cpu), &cpu_online_map)
    +#define for_each_present_cpu(cpu) for_each_cpu((cpu), &cpu_present_map)

    #endif /* __LINUX_CPUMASK_H */
    --- linux-2.6.28.orig/kernel/sched.c
    +++ linux-2.6.28/kernel/sched.c
    @@ -1523,7 +1523,7 @@ static int tg_shares_up(struct task_grou
    struct sched_domain *sd = data;
    int i;

    - for_each_cpu_mask(i, sd->span) {
    + for_each_cpu(i, &sd->span) {
    rq_weight += tg->cfs_rq[i]->load.weight;
    shares += tg->cfs_rq[i]->shares;
    }
    @@ -1537,7 +1537,7 @@ static int tg_shares_up(struct task_grou
    if (!rq_weight)
    rq_weight = cpus_weight(sd->span) * NICE_0_LOAD;

    - for_each_cpu_mask(i, sd->span)
    + for_each_cpu(i, &sd->span)
    update_group_shares_cpu(tg, i, shares, rq_weight);

    return 0;
    @@ -2074,7 +2074,7 @@ find_idlest_group(struct sched_domain *s
    /* Tally up the load of all CPUs in the group */
    avg_load = 0;

    - for_each_cpu_mask_nr(i, group->cpumask) {
    + for_each_cpu(i, &group->cpumask) {
    /* Bias balancing toward cpus of our domain */
    if (local_group)
    load = source_load(i, load_idx);
    @@ -2116,7 +2116,7 @@ find_idlest_cpu(struct sched_group *grou
    /* Traverse only the allowed CPUs */
    cpus_and(*tmp, group->cpumask, p->cpus_allowed);

    - for_each_cpu_mask_nr(i, *tmp) {
    + for_each_cpu(i, tmp) {
    load = weighted_cpuload(i);

    if (load < min_load || (load == min_load && i == this_cpu)) {
    @@ -3134,7 +3134,7 @@ find_busiest_group(struct sched_domain *
    max_cpu_load = 0;
    min_cpu_load = ~0UL;

    - for_each_cpu_mask_nr(i, group->cpumask) {
    + for_each_cpu(i, &group->cpumask) {
    struct rq *rq;

    if (!cpu_isset(i, *cpus))
    @@ -3413,7 +3413,7 @@ find_busiest_queue(struct sched_group *g
    unsigned long max_load = 0;
    int i;

    - for_each_cpu_mask_nr(i, group->cpumask) {
    + for_each_cpu(i, &group->cpumask) {
    unsigned long wl;

    if (!cpu_isset(i, *cpus))
    @@ -3955,7 +3955,7 @@ static void run_rebalance_domains(struct
    int balance_cpu;

    cpu_clear(this_cpu, cpus);
    - for_each_cpu_mask_nr(balance_cpu, cpus) {
    + for_each_cpu(balance_cpu, &cpus) {
    /*
    * If this cpu gets work to do, stop the load balancing
    * work being done for other cpus. Next load
    @@ -6935,7 +6935,7 @@ init_sched_build_groups(const cpumask_t

    cpus_clear(*covered);

    - for_each_cpu_mask_nr(i, *span) {
    + for_each_cpu(i, span) {
    struct sched_group *sg;
    int group = group_fn(i, cpu_map, &sg, tmpmask);
    int j;
    @@ -6946,7 +6946,7 @@ init_sched_build_groups(const cpumask_t
    cpus_clear(sg->cpumask);
    sg->__cpu_power = 0;

    - for_each_cpu_mask_nr(j, *span) {
    + for_each_cpu(j, span) {
    if (group_fn(j, cpu_map, NULL, tmpmask) != group)
    continue;

    @@ -7146,7 +7146,7 @@ static void init_numa_sched_groups_power
    if (!sg)
    return;
    do {
    - for_each_cpu_mask_nr(j, sg->cpumask) {
    + for_each_cpu(j, &sg->cpumask) {
    struct sched_domain *sd;

    sd = &per_cpu(phys_domains, j);
    @@ -7171,7 +7171,7 @@ static void free_sched_groups(const cpum
    {
    int cpu, i;

    - for_each_cpu_mask_nr(cpu, *cpu_map) {
    + for_each_cpu(cpu, cpu_map) {
    struct sched_group **sched_group_nodes
    = sched_group_nodes_bycpu[cpu];

    @@ -7418,7 +7418,7 @@ static int __build_sched_domains(const c
    /*
    * Set up domains for cpus specified by the cpu_map.
    */
    - for_each_cpu_mask_nr(i, *cpu_map) {
    + for_each_cpu(i, cpu_map) {
    struct sched_domain *sd = NULL, *p;
    SCHED_CPUMASK_VAR(nodemask, allmasks);

    @@ -7485,7 +7485,7 @@ static int __build_sched_domains(const c

    #ifdef CONFIG_SCHED_SMT
    /* Set up CPU (sibling) groups */
    - for_each_cpu_mask_nr(i, *cpu_map) {
    + for_each_cpu(i, cpu_map) {
    SCHED_CPUMASK_VAR(this_sibling_map, allmasks);
    SCHED_CPUMASK_VAR(send_covered, allmasks);

    @@ -7502,7 +7502,7 @@ static int __build_sched_domains(const c

    #ifdef CONFIG_SCHED_MC
    /* Set up multi-core groups */
    - for_each_cpu_mask_nr(i, *cpu_map) {
    + for_each_cpu(i, cpu_map) {
    SCHED_CPUMASK_VAR(this_core_map, allmasks);
    SCHED_CPUMASK_VAR(send_covered, allmasks);

    @@ -7569,7 +7569,7 @@ static int __build_sched_domains(const c
    goto error;
    }
    sched_group_nodes[i] = sg;
    - for_each_cpu_mask_nr(j, *nodemask) {
    + for_each_cpu(j, nodemask) {
    struct sched_domain *sd;

    sd = &per_cpu(node_domains, j);
    @@ -7615,21 +7615,21 @@ static int __build_sched_domains(const c

    /* Calculate CPU power for physical packages and nodes */
    #ifdef CONFIG_SCHED_SMT
    - for_each_cpu_mask_nr(i, *cpu_map) {
    + for_each_cpu(i, cpu_map) {
    struct sched_domain *sd = &per_cpu(cpu_domains, i);

    init_sched_groups_power(i, sd);
    }
    #endif
    #ifdef CONFIG_SCHED_MC
    - for_each_cpu_mask_nr(i, *cpu_map) {
    + for_each_cpu(i, cpu_map) {
    struct sched_domain *sd = &per_cpu(core_domains, i);

    init_sched_groups_power(i, sd);
    }
    #endif

    - for_each_cpu_mask_nr(i, *cpu_map) {
    + for_each_cpu(i, cpu_map) {
    struct sched_domain *sd = &per_cpu(phys_domains, i);

    init_sched_groups_power(i, sd);
    @@ -7649,7 +7649,7 @@ static int __build_sched_domains(const c
    #endif

    /* Attach the domains */
    - for_each_cpu_mask_nr(i, *cpu_map) {
    + for_each_cpu(i, cpu_map) {
    struct sched_domain *sd;
    #ifdef CONFIG_SCHED_SMT
    sd = &per_cpu(cpu_domains, i);
    @@ -7732,7 +7732,7 @@ static void detach_destroy_domains(const

    unregister_sched_domain_sysctl();

    - for_each_cpu_mask_nr(i, *cpu_map)
    + for_each_cpu(i, cpu_map)
    cpu_attach_domain(NULL, &def_root_domain, i);
    synchronize_sched();
    arch_destroy_sched_domains(cpu_map, &tmpmask);
    --- linux-2.6.28.orig/kernel/workqueue.c
    +++ linux-2.6.28/kernel/workqueue.c
    @@ -415,7 +415,7 @@ void flush_workqueue(struct workqueue_st
    might_sleep();
    lock_map_acquire(&wq->lockdep_map);
    lock_map_release(&wq->lockdep_map);
    - for_each_cpu_mask_nr(cpu, *cpu_map)
    + for_each_cpu(cpu, cpu_map)
    flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, cpu));
    }
    EXPORT_SYMBOL_GPL(flush_workqueue);
    @@ -546,7 +546,7 @@ static void wait_on_work(struct work_str
    wq = cwq->wq;
    cpu_map = wq_cpu_map(wq);

    - for_each_cpu_mask_nr(cpu, *cpu_map)
    + for_each_cpu(cpu, cpu_map)
    wait_on_cpu_work(per_cpu_ptr(wq->cpu_wq, cpu), work);
    }

    @@ -906,7 +906,7 @@ void destroy_workqueue(struct workqueue_
    list_del(&wq->list);
    spin_unlock(&workqueue_lock);

    - for_each_cpu_mask_nr(cpu, *cpu_map)
    + for_each_cpu(cpu, cpu_map)
    cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu));
    cpu_maps_update_done();

    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -28,7 +28,7 @@ int __any_online_cpu(const cpumask_t *ma
    {
    int cpu;

    - for_each_cpu_mask(cpu, *mask) {
    + for_each_cpu(cpu, mask) {
    if (cpu_online(cpu))
    break;
    }
    --- linux-2.6.28.orig/mm/allocpercpu.c
    +++ linux-2.6.28/mm/allocpercpu.c
    @@ -34,7 +34,7 @@ static void percpu_depopulate(void *__pd
    static void __percpu_depopulate_mask(void *__pdata, cpumask_t *mask)
    {
    int cpu;
    - for_each_cpu_mask_nr(cpu, *mask)
    + for_each_cpu(cpu, mask)
    percpu_depopulate(__pdata, cpu);
    }

    @@ -86,7 +86,7 @@ static int __percpu_populate_mask(void *
    int cpu;

    cpus_clear(populated);
    - for_each_cpu_mask_nr(cpu, *mask)
    + for_each_cpu(cpu, mask)
    if (unlikely(!percpu_populate(__pdata, size, gfp, cpu))) {
    __percpu_depopulate_mask(__pdata, &populated);
    return -ENOMEM;
    --- linux-2.6.28.orig/mm/vmstat.c
    +++ linux-2.6.28/mm/vmstat.c
    @@ -20,14 +20,14 @@
    DEFINE_PER_CPU(struct vm_event_state, vm_event_states) = {{0}};
    EXPORT_PER_CPU_SYMBOL(vm_event_states);

    -static void sum_vm_events(unsigned long *ret, cpumask_t *cpumask)
    +static void sum_vm_events(unsigned long *ret, const cpumask_t *cpumask)
    {
    int cpu;
    int i;

    memset(ret, 0, NR_VM_EVENT_ITEMS * sizeof(unsigned long));

    - for_each_cpu_mask_nr(cpu, *cpumask) {
    + for_each_cpu(cpu, cpumask) {
    struct vm_event_state *this = &per_cpu(vm_event_states, cpu);

    for (i = 0; i < NR_VM_EVENT_ITEMS; i++)


  10. [PATCH 09/35] cpumask: add cpumask_copy()

    Since cpumasks are to become pointers to undefined structs, we need to
    replace assignments. Also, dynamically allocated ones will eventually
    be nr_cpu_ids bits (<= NR_CPUS), so assignment is a definite no-no.
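
As a sketch of the replacement pattern (hypothetical helper; in this tree p->cpus_allowed is still a cpumask_t):

    #include <linux/sched.h>
    #include <linux/cpumask.h>

    /* Snapshot a task's allowed mask.  Plain assignment would copy
     * all NR_CPUS bits and stops compiling once struct cpumask
     * becomes an undefined type; cpumask_copy() is the replacement. */
    static void snapshot_allowed(struct cpumask *dst, const struct task_struct *p)
    {
            cpumask_copy(dst, &p->cpus_allowed);  /* not: *dst = p->cpus_allowed */
    }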

    Signed-off-by: Mike Travis
    Signed-off-by: Rusty Russell
    ---
    include/linux/cpumask.h | 8 ++++++++
    1 file changed, 8 insertions(+)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -64,6 +64,8 @@
    * int next_cpu(cpu, mask) Next cpu past 'cpu', or NR_CPUS
    * int next_cpu_nr(cpu, mask) Next cpu past 'cpu', or nr_cpu_ids
    *
    + * void cpumask_copy(dmask, smask) dmask = smask
    + *
    * size_t cpumask_size() Length of cpumask in bytes.
    * cpumask_t cpumask_of_cpu(cpu) Return cpumask with bit 'cpu' set
    * (can be used as an lvalue)
    @@ -350,6 +352,12 @@ static inline void cpumask_fold(struct c
    bitmap_fold(dstp->bits, origp->bits, sz, NR_CPUS);
    }

    +static inline void cpumask_copy(struct cpumask *dstp,
    + const struct cpumask *srcp)
    +{
    + bitmap_copy(cpumask_bits(dstp), cpumask_bits(srcp), NR_CPUS);
    +}
    +
    /*
    * Special-case data structure for "single bit set only" constant CPU masks.
    *


  11. Re: [PATCH 04/35] cpumask: centralize cpu_online_map and cpu_possible_map - resubmit

    cpumask: centralize cpu_online_map and cpu_possible_map
    From: Rusty Russell

    Each SMP arch defines these themselves. Move them to a central
    location.

    Two twists:
    1) Some archs set possible_map to all 1, so we add a
    CONFIG_INIT_ALL_POSSIBLE for this rather than break them.

    2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
    Those archs simply have phys_cpu_present_map replaced everywhere.
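With the maps centralised in kernel/cpu.c, an arch's setup code only marks CPUs; it no longer defines or exports the masks. Roughly (hypothetical arch and probe helper, following the pattern in the diff below):

    #include <linux/cpumask.h>

    /* No per-arch definition or EXPORT_SYMBOL of cpu_possible_map
     * any more; kernel/cpu.c owns it. */
    static void __init foo_smp_setup(void)
    {
            int i;

            for (i = 0; i < NR_CPUS; i++)
                    if (foo_cpu_present(i))     /* hypothetical firmware query */
                            cpu_set(i, cpu_possible_map);
    }
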

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis

    ---
    arch/alpha/kernel/smp.c | 5 -----
    arch/arm/kernel/smp.c | 10 ----------
    arch/cris/arch-v32/kernel/smp.c | 4 ----
    arch/ia64/kernel/smpboot.c | 6 ------
    arch/m32r/Kconfig | 1 +
    arch/m32r/kernel/smpboot.c | 6 ------
    arch/mips/include/asm/smp.h | 3 ---
    arch/mips/kernel/smp-cmp.c | 2 +-
    arch/mips/kernel/smp-mt.c | 2 +-
    arch/mips/kernel/smp.c | 7 +------
    arch/mips/kernel/smtc.c | 6 +++---
    arch/mips/pmc-sierra/yosemite/smp.c | 6 +++---
    arch/mips/sgi-ip27/ip27-smp.c | 2 +-
    arch/mips/sibyte/bcm1480/smp.c | 8 ++++----
    arch/mips/sibyte/sb1250/smp.c | 8 ++++----
    arch/parisc/Kconfig | 1 +
    arch/parisc/kernel/smp.c | 15 ---------------
    arch/powerpc/kernel/smp.c | 4 ----
    arch/s390/Kconfig | 1 +
    arch/s390/kernel/smp.c | 6 ------
    arch/sh/kernel/smp.c | 6 ------
    arch/sparc/include/asm/smp_32.h | 2 --
    arch/sparc/kernel/smp.c | 6 ++----
    arch/sparc/kernel/sparc_ksyms.c | 4 ----
    arch/sparc64/kernel/smp.c | 4 ----
    arch/um/kernel/smp.c | 7 -------
    arch/x86/kernel/smpboot.c | 6 ------
    arch/x86/mach-voyager/voyager_smp.c | 7 -------
    init/Kconfig | 9 +++++++++
    kernel/cpu.c | 11 ++++++-----
    30 files changed, 38 insertions(+), 127 deletions(-)

    --- linux-2.6.28.orig/arch/alpha/kernel/smp.c
    +++ linux-2.6.28/arch/alpha/kernel/smp.c
    @@ -70,11 +70,6 @@ enum ipi_message_type {
    /* Set to a secondary's cpuid when it comes online. */
    static int smp_secondary_alive __devinitdata = 0;

    -/* Which cpus ids came online. */
    -cpumask_t cpu_online_map;
    -
    -EXPORT_SYMBOL(cpu_online_map);
    -
    int smp_num_probed; /* Internal processor count */
    int smp_num_cpus = 1; /* Number that came online. */
    EXPORT_SYMBOL(smp_num_cpus);
    --- linux-2.6.28.orig/arch/arm/kernel/smp.c
    +++ linux-2.6.28/arch/arm/kernel/smp.c
    @@ -34,16 +34,6 @@
    #include

    /*
    - * bitmask of present and online CPUs.
    - * The present bitmask indicates that the CPU is physically present.
    - * The online bitmask indicates that the CPU is up and running.
    - */
    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    -/*
    * as from 2.5, kernels no longer have an init_tasks structure
    * so we need some other way of telling a new secondary core
    * where to place its SVC stack
    --- linux-2.6.28.orig/arch/cris/arch-v32/kernel/smp.c
    +++ linux-2.6.28/arch/cris/arch-v32/kernel/smp.c
    @@ -29,11 +29,7 @@
    spinlock_t cris_atomic_locks[] = { [0 ... LOCK_COUNT - 1] = SPIN_LOCK_UNLOCKED};

    /* CPU masks */
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_online_map);
    cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);
    EXPORT_SYMBOL(phys_cpu_present_map);

    /* Variables used during SMP boot */
    --- linux-2.6.28.orig/arch/ia64/kernel/smpboot.c
    +++ linux-2.6.28/arch/ia64/kernel/smpboot.c
    @@ -131,12 +131,6 @@ struct task_struct *task_for_booting_cpu
    */
    DEFINE_PER_CPU(int, cpu_state);

    -/* Bitmasks of currently online, and possible CPUs */
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
    EXPORT_SYMBOL(cpu_core_map);
    DEFINE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
    --- linux-2.6.28.orig/arch/m32r/Kconfig
    +++ linux-2.6.28/arch/m32r/Kconfig
    @@ -10,6 +10,7 @@ config M32R
    default y
    select HAVE_IDE
    select HAVE_OPROFILE
    + select INIT_ALL_POSSIBLE

    config SBUS
    bool
    --- linux-2.6.28.orig/arch/m32r/kernel/smpboot.c
    +++ linux-2.6.28/arch/m32r/kernel/smpboot.c
    @@ -73,17 +73,11 @@ static unsigned int bsp_phys_id = -1;
    /* Bitmask of physically existing CPUs */
    physid_mask_t phys_cpu_present_map;

    -/* Bitmask of currently online CPUs */
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    cpumask_t cpu_bootout_map;
    cpumask_t cpu_bootin_map;
    static cpumask_t cpu_callin_map;
    cpumask_t cpu_callout_map;
    EXPORT_SYMBOL(cpu_callout_map);
    -cpumask_t cpu_possible_map = CPU_MASK_ALL;
    -EXPORT_SYMBOL(cpu_possible_map);

    /* Per CPU bogomips and other parameters */
    struct cpuinfo_m32r cpu_data[NR_CPUS] __cacheline_aligned;
    --- linux-2.6.28.orig/arch/mips/include/asm/smp.h
    +++ linux-2.6.28/arch/mips/include/asm/smp.h
    @@ -38,9 +38,6 @@ extern int __cpu_logical_map[NR_CPUS];
    #define SMP_RESCHEDULE_YOURSELF 0x1 /* XXX braindead */
    #define SMP_CALL_FUNCTION 0x2

    -extern cpumask_t phys_cpu_present_map;
    -#define cpu_possible_map phys_cpu_present_map
    -
    extern void asmlinkage smp_bootstrap(void);

    /*
    --- linux-2.6.28.orig/arch/mips/kernel/smp-cmp.c
    +++ linux-2.6.28/arch/mips/kernel/smp-cmp.c
    @@ -226,7 +226,7 @@ void __init cmp_smp_setup(void)

    for (i = 1; i < NR_CPUS; i++) {
    if (amon_cpu_avail(i)) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = ++ncpu;
    __cpu_logical_map[ncpu] = i;
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smp-mt.c
    +++ linux-2.6.28/arch/mips/kernel/smp-mt.c
    @@ -70,7 +70,7 @@ static unsigned int __init smvp_vpe_init
    write_vpe_c0_vpeconf0(tmp);

    /* Record this as available CPU */
    - cpu_set(tc, phys_cpu_present_map);
    + cpu_set(tc, cpu_possible_map);
    __cpu_number_map[tc] = ++ncpu;
    __cpu_logical_map[ncpu] = tc;
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smp.c
    +++ linux-2.6.28/arch/mips/kernel/smp.c
    @@ -44,15 +44,10 @@
    #include
    #endif /* CONFIG_MIPS_MT_SMTC */

    -cpumask_t phys_cpu_present_map; /* Bitmask of available CPUs */
    volatile cpumask_t cpu_callin_map; /* Bitmask of started secondaries */
    -cpumask_t cpu_online_map; /* Bitmask of currently online CPUs */
    int __cpu_number_map[NR_CPUS]; /* Map physical to logical */
    int __cpu_logical_map[NR_CPUS]; /* Map logical to physical */

    -EXPORT_SYMBOL(phys_cpu_present_map);
    -EXPORT_SYMBOL(cpu_online_map);
    -
    extern void cpu_idle(void);

    /* Number of TCs (or siblings in Intel speak) per CPU core */
    @@ -199,7 +194,7 @@ void __devinit smp_prepare_boot_cpu(void
    */
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;
    - cpu_set(0, phys_cpu_present_map);
    + cpu_set(0, cpu_possible_map);
    cpu_set(0, cpu_online_map);
    cpu_set(0, cpu_callin_map);
    }
    --- linux-2.6.28.orig/arch/mips/kernel/smtc.c
    +++ linux-2.6.28/arch/mips/kernel/smtc.c
    @@ -290,7 +290,7 @@ static void smtc_configure_tlb(void)
    * possibly leave some TCs/VPEs as "slave" processors.
    *
    * Use c0_MVPConf0 to find out how many TCs are available, setting up
    - * phys_cpu_present_map and the logical/physical mappings.
    + * cpu_possible_map and the logical/physical mappings.
    */

    int __init smtc_build_cpu_map(int start_cpu_slot)
    @@ -304,7 +304,7 @@ int __init smtc_build_cpu_map(int start_
    */
    ntcs = ((read_c0_mvpconf0() & MVPCONF0_PTC) >> MVPCONF0_PTC_SHIFT) + 1;
for (i=start_cpu_slot; i<NR_CPUS && i<ntcs; i++) {
- cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = i;
    __cpu_logical_map[i] = i;
    }
    @@ -521,7 +521,7 @@ void smtc_prepare_cpus(int cpus)
    * Pull any physically present but unused TCs out of circulation.
    */
    while (tc < (((val & MVPCONF0_PTC) >> MVPCONF0_PTC_SHIFT) + 1)) {
    - cpu_clear(tc, phys_cpu_present_map);
    + cpu_clear(tc, cpu_possible_map);
    cpu_clear(tc, cpu_present_map);
    tc++;
    }
    --- linux-2.6.28.orig/arch/mips/pmc-sierra/yosemite/smp.c
    +++ linux-2.6.28/arch/mips/pmc-sierra/yosemite/smp.c
    @@ -141,7 +141,7 @@ static void __cpuinit yos_boot_secondary
    }

    /*
    - * Detect available CPUs, populate phys_cpu_present_map before smp_init
    + * Detect available CPUs, populate cpu_possible_map before smp_init
    *
    * We don't want to start the secondary CPU yet nor do we have a nice probing
    * feature in PMON so we just assume presence of the secondary core.
    @@ -150,10 +150,10 @@ static void __init yos_smp_setup(void)
    {
    int i;

    - cpus_clear(phys_cpu_present_map);
    + cpus_clear(cpu_possible_map);

    for (i = 0; i < 2; i++) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = i;
    __cpu_logical_map[i] = i;
    }
    --- linux-2.6.28.orig/arch/mips/sgi-ip27/ip27-smp.c
    +++ linux-2.6.28/arch/mips/sgi-ip27/ip27-smp.c
    @@ -76,7 +76,7 @@ static int do_cpumask(cnodeid_t cnode, n
    /* Only let it join in if it's marked enabled */
    if ((acpu->cpu_info.flags & KLINFO_ENABLE) &&
    (tot_cpus_found != NR_CPUS)) {
    - cpu_set(cpuid, phys_cpu_present_map);
    + cpu_set(cpuid, cpu_possible_map);
    alloc_cpupda(cpuid, tot_cpus_found);
    cpus_found++;
    tot_cpus_found++;
    --- linux-2.6.28.orig/arch/mips/sibyte/bcm1480/smp.c
    +++ linux-2.6.28/arch/mips/sibyte/bcm1480/smp.c
    @@ -136,7 +136,7 @@ static void __cpuinit bcm1480_boot_secon

    /*
    * Use CFE to find out how many CPUs are available, setting up
    - * phys_cpu_present_map and the logical/physical mappings.
    + * cpu_possible_map and the logical/physical mappings.
    * XXXKW will the boot CPU ever not be physical 0?
    *
    * Common setup before any secondaries are started
    @@ -145,14 +145,14 @@ static void __init bcm1480_smp_setup(voi
    {
    int i, num;

    - cpus_clear(phys_cpu_present_map);
    - cpu_set(0, phys_cpu_present_map);
    + cpus_clear(cpu_possible_map);
    + cpu_set(0, cpu_possible_map);
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;

    for (i = 1, num = 0; i < NR_CPUS; i++) {
    if (cfe_cpu_stop(i) == 0) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = ++num;
    __cpu_logical_map[num] = i;
    }
    --- linux-2.6.28.orig/arch/mips/sibyte/sb1250/smp.c
    +++ linux-2.6.28/arch/mips/sibyte/sb1250/smp.c
    @@ -124,7 +124,7 @@ static void __cpuinit sb1250_boot_second

    /*
    * Use CFE to find out how many CPUs are available, setting up
    - * phys_cpu_present_map and the logical/physical mappings.
    + * cpu_possible_map and the logical/physical mappings.
    * XXXKW will the boot CPU ever not be physical 0?
    *
    * Common setup before any secondaries are started
    @@ -133,14 +133,14 @@ static void __init sb1250_smp_setup(void
    {
    int i, num;

    - cpus_clear(phys_cpu_present_map);
    - cpu_set(0, phys_cpu_present_map);
    + cpus_clear(cpu_possible_map);
    + cpu_set(0, cpu_possible_map);
    __cpu_number_map[0] = 0;
    __cpu_logical_map[0] = 0;

    for (i = 1, num = 0; i < NR_CPUS; i++) {
    if (cfe_cpu_stop(i) == 0) {
    - cpu_set(i, phys_cpu_present_map);
    + cpu_set(i, cpu_possible_map);
    __cpu_number_map[i] = ++num;
    __cpu_logical_map[num] = i;
    }
    --- linux-2.6.28.orig/arch/parisc/Kconfig
    +++ linux-2.6.28/arch/parisc/Kconfig
    @@ -11,6 +11,7 @@ config PARISC
    select HAVE_OPROFILE
    select RTC_CLASS
    select RTC_DRV_PARISC
    + select INIT_ALL_POSSIBLE
    help
    The PA-RISC microprocessor is designed by Hewlett-Packard and used
    in many of their workstations & servers (HP9000 700 and 800 series,
    --- linux-2.6.28.orig/arch/parisc/kernel/smp.c
    +++ linux-2.6.28/arch/parisc/kernel/smp.c
    @@ -67,21 +67,6 @@ static volatile int cpu_now_booting __re

    static int parisc_max_cpus __read_mostly = 1;

    -/* online cpus are ones that we've managed to bring up completely
    - * possible cpus are all valid cpu
    - * present cpus are all detected cpu
    - *
    - * On startup we bring up the "possible" cpus. Since we discover
    - * CPUs later, we add them as hotplug, so the possible cpu mask is
    - * empty in the beginning.
    - */
    -
    -cpumask_t cpu_online_map __read_mostly = CPU_MASK_NONE; /* Bitmap of online CPUs */
    -cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL; /* Bitmap of Present CPUs */
    -
    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    DEFINE_PER_CPU(spinlock_t, ipi_lock) = SPIN_LOCK_UNLOCKED;

    enum ipi_message_type {
    --- linux-2.6.28.orig/arch/powerpc/kernel/smp.c
    +++ linux-2.6.28/arch/powerpc/kernel/smp.c
    @@ -60,13 +60,9 @@
    int smp_hw_index[NR_CPUS];
    struct thread_info *secondary_ti;

    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map) = CPU_MASK_NONE;
    DEFINE_PER_CPU(cpumask_t, cpu_core_map) = CPU_MASK_NONE;

    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(cpu_possible_map);
    EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
    EXPORT_PER_CPU_SYMBOL(cpu_core_map);

    --- linux-2.6.28.orig/arch/s390/Kconfig
    +++ linux-2.6.28/arch/s390/Kconfig
    @@ -75,6 +75,7 @@ config S390
    select HAVE_KRETPROBES
    select HAVE_KVM if 64BIT
    select HAVE_ARCH_TRACEHOOK
    + select INIT_ALL_POSSIBLE

    source "init/Kconfig"

    --- linux-2.6.28.orig/arch/s390/kernel/smp.c
    +++ linux-2.6.28/arch/s390/kernel/smp.c
    @@ -52,12 +52,6 @@
    struct _lowcore *lowcore_ptr[NR_CPUS];
    EXPORT_SYMBOL(lowcore_ptr);

    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    -cpumask_t cpu_possible_map = CPU_MASK_ALL;
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    static struct task_struct *current_set[NR_CPUS];

    static u8 smp_cpu_type;
    --- linux-2.6.28.orig/arch/sh/kernel/smp.c
    +++ linux-2.6.28/arch/sh/kernel/smp.c
    @@ -30,12 +30,6 @@
    int __cpu_number_map[NR_CPUS]; /* Map physical to logical */
    int __cpu_logical_map[NR_CPUS]; /* Map logical to physical */

    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    -cpumask_t cpu_online_map;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    static inline void __init smp_store_cpu_info(unsigned int cpu)
    {
    struct sh_cpuinfo *c = cpu_data + cpu;
    --- linux-2.6.28.orig/arch/sparc/include/asm/smp_32.h
    +++ linux-2.6.28/arch/sparc/include/asm/smp_32.h
    @@ -29,8 +29,6 @@
    */

    extern unsigned char boot_cpu_id;
    -extern cpumask_t phys_cpu_present_map;
    -#define cpu_possible_map phys_cpu_present_map

    typedef void (*smpfunc_t)(unsigned long, unsigned long, unsigned long,
    unsigned long, unsigned long);
    --- linux-2.6.28.orig/arch/sparc/kernel/smp.c
    +++ linux-2.6.28/arch/sparc/kernel/smp.c
    @@ -39,8 +39,6 @@ volatile unsigned long cpu_callin_map[NR
    unsigned char boot_cpu_id = 0;
    unsigned char boot_cpu_id4 = 0; /* boot_cpu_id << 2 */

    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
    cpumask_t smp_commenced_mask = CPU_MASK_NONE;

    /* The only guaranteed locking primitive available on all Sparc
    @@ -334,7 +332,7 @@ void __init smp_setup_cpu_possible_map(v
    instance = 0;
    while (!cpu_find_by_instance(instance, NULL, &mid)) {
    if (mid < NR_CPUS) {
    - cpu_set(mid, phys_cpu_present_map);
    + cpu_set(mid, cpu_possible_map);
    cpu_set(mid, cpu_present_map);
    }
    instance++;
    @@ -354,7 +352,7 @@ void __init smp_prepare_boot_cpu(void)

    current_thread_info()->cpu = cpuid;
    cpu_set(cpuid, cpu_online_map);
    - cpu_set(cpuid, phys_cpu_present_map);
    + cpu_set(cpuid, cpu_possible_map);
    }

    int __cpuinit __cpu_up(unsigned int cpu)
    --- linux-2.6.28.orig/arch/sparc/kernel/sparc_ksyms.c
    +++ linux-2.6.28/arch/sparc/kernel/sparc_ksyms.c
    @@ -113,10 +113,6 @@ EXPORT_PER_CPU_SYMBOL(__cpu_data);
    #ifdef CONFIG_SMP
    /* IRQ implementation. */
    EXPORT_SYMBOL(synchronize_irq);
    -
    -/* CPU online map and active count. */
    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(phys_cpu_present_map);
    #endif

    EXPORT_SYMBOL(__udelay);
    --- linux-2.6.28.orig/arch/sparc64/kernel/smp.c
    +++ linux-2.6.28/arch/sparc64/kernel/smp.c
    @@ -49,14 +49,10 @@

    int sparc64_multi_core __read_mostly;

    -cpumask_t cpu_possible_map __read_mostly = CPU_MASK_NONE;
    -cpumask_t cpu_online_map __read_mostly = CPU_MASK_NONE;
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map) = CPU_MASK_NONE;
    cpumask_t cpu_core_map[NR_CPUS] __read_mostly =
    { [0 ... NR_CPUS-1] = CPU_MASK_NONE };

    -EXPORT_SYMBOL(cpu_possible_map);
    -EXPORT_SYMBOL(cpu_online_map);
    EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
    EXPORT_SYMBOL(cpu_core_map);

    --- linux-2.6.28.orig/arch/um/kernel/smp.c
    +++ linux-2.6.28/arch/um/kernel/smp.c
    @@ -25,13 +25,6 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_ga
    #include "irq_user.h"
    #include "os.h"

    -/* CPU online map, set by smp_boot_cpus */
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -
    -EXPORT_SYMBOL(cpu_online_map);
    -EXPORT_SYMBOL(cpu_possible_map);
    -
    /* Per CPU bogomips and other parameters
    * The only piece used here is the ipi pipe, which is set before SMP is
    * started and never changed.
    --- linux-2.6.28.orig/arch/x86/kernel/smpboot.c
    +++ linux-2.6.28/arch/x86/kernel/smpboot.c
    @@ -101,14 +101,8 @@ EXPORT_SYMBOL(smp_num_siblings);
    /* Last level cache ID of each logical CPU */
    DEFINE_PER_CPU(u16, cpu_llc_id) = BAD_APICID;

    -/* bitmap of online cpus */
    -cpumask_t cpu_online_map __read_mostly;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    cpumask_t cpu_callin_map;
    cpumask_t cpu_callout_map;
    -cpumask_t cpu_possible_map;
    -EXPORT_SYMBOL(cpu_possible_map);

    /* representing HT siblings of each logical CPU */
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);
    --- linux-2.6.28.orig/arch/x86/mach-voyager/voyager_smp.c
    +++ linux-2.6.28/arch/x86/mach-voyager/voyager_smp.c
    @@ -62,11 +62,6 @@ static int voyager_extended_cpus = 1;
    /* Used for the invalidate map that's also checked in the spinlock */
    static volatile unsigned long smp_invalidate_needed;

    -/* Bitmask of currently online CPUs - used by setup.c for
    - /proc/cpuinfo, visible externally but still physical */
    -cpumask_t cpu_online_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_online_map);
    -
    /* Bitmask of CPUs present in the system - exported by i386_syms.c, used
    * by scheduler but indexed physically */
    cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
    @@ -216,8 +211,6 @@ static cpumask_t smp_commenced_mask = CP
    /* This is for the new dynamic CPU boot code */
    cpumask_t cpu_callin_map = CPU_MASK_NONE;
    cpumask_t cpu_callout_map = CPU_MASK_NONE;
    -cpumask_t cpu_possible_map = CPU_MASK_NONE;
    -EXPORT_SYMBOL(cpu_possible_map);

    /* The per processor IRQ masks (these are usually kept in sync) */
    static __u16 vic_irq_mask[NR_CPUS] __cacheline_aligned;
    --- linux-2.6.28.orig/init/Kconfig
    +++ linux-2.6.28/init/Kconfig
    @@ -911,6 +911,15 @@ config KMOD

    endif # MODULES

    +config INIT_ALL_POSSIBLE
    + bool
    + help
    + Back when each arch used to define their own cpu_online_map and
    + cpu_possible_map, some of them chose to initialize cpu_possible_map
    + with all 1s, and others with all 0s. When they were centralised,
    + it was better to provide this option than to break all the archs
+ and have several arch maintainers pursuing me down dark alleys.
    +
    config STOP_MACHINE
    bool
    default y
    --- linux-2.6.28.orig/kernel/cpu.c
    +++ linux-2.6.28/kernel/cpu.c
    @@ -24,19 +24,20 @@
    cpumask_t cpu_present_map __read_mostly;
    EXPORT_SYMBOL(cpu_present_map);

    -#ifndef CONFIG_SMP
    -
    /*
    * Represents all cpu's that are currently online.
    */
    -cpumask_t cpu_online_map __read_mostly = CPU_MASK_ALL;
    +cpumask_t cpu_online_map __read_mostly;
    EXPORT_SYMBOL(cpu_online_map);

    +#ifdef CONFIG_INIT_ALL_POSSIBLE
    cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
    +#else
    +cpumask_t cpu_possible_map __read_mostly;
    +#endif
    EXPORT_SYMBOL(cpu_possible_map);

    -#else /* CONFIG_SMP */
    -
    +#ifdef CONFIG_SMP
    /* Serializes the updates to cpu_online_map, cpu_present_map */
    static DEFINE_MUTEX(cpu_add_remove_lock);



  12. [PATCH 24/35] cpumask: Deprecate CPUMASK_ALLOC etc in favor of cpumask_var_t. From: Rusty Russell <rusty@rustcorp.com.au>

    Remove CPUMASK_ALLOC() in favor of alloc_cpumask_var().
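
The replacement pattern looks roughly like this (hypothetical function; cpumask_var_t is introduced elsewhere in this series, and like CPUMASK_ALLOC it is stack-based when NR_CPUS is small and kmalloc-based otherwise):

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/slab.h>

    static int frob_online(void)
    {
            cpumask_var_t tmp;

            if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
                    return -ENOMEM;

            cpumask_and(tmp, &cpu_online_map, &cpu_possible_map);
            /* ... use tmp ... */

            free_cpumask_var(tmp);
            return 0;
    }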

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 46 ++++++++--------------------------------------
    1 file changed, 8 insertions(+), 38 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -57,35 +57,6 @@
    * CPU_MASK_NONE Initializer - no bits set
    * unsigned long *cpumask_bits(mask) Array of unsigned long's in mask
    *
    - * CPUMASK_ALLOC kmalloc's a structure that is a composite of many cpumask_t
    - * variables, and CPUMASK_PTR provides pointers to each field.
    - *
    - * The structure should be defined something like this:
    - * struct my_cpumasks {
    - * cpumask_t mask1;
    - * cpumask_t mask2;
    - * };
    - *
    - * Usage is then:
    - * CPUMASK_ALLOC(my_cpumasks);
    - * CPUMASK_PTR(mask1, my_cpumasks);
    - * CPUMASK_PTR(mask2, my_cpumasks);
    - *
    - * --- DO NOT reference cpumask_t pointers until this check ---
    - * if (my_cpumasks == NULL)
    - * "kmalloc failed"...
    - *
    - * References are now pointers to the cpumask_t variables (*mask1, ...)
    - *
    - *if NR_CPUS > BITS_PER_LONG
    - * CPUMASK_ALLOC(m) Declares and allocates struct m *m =
    - * kmalloc(sizeof(*m), GFP_KERNEL)
    - * CPUMASK_FREE(m) Macro for kfree(m)
    - *else
    - * CPUMASK_ALLOC(m) Declares struct m _m, *m = &_m
    - * CPUMASK_FREE(m) Nop
    - *endif
    - * CPUMASK_PTR(v, m) Declares cpumask_t *v = &(m->v)
    * ------------------------------------------------------------------------
    *
    * int cpumask_scnprintf(buf, len, mask) Format cpumask for printing
    @@ -183,6 +154,14 @@ extern cpumask_t _unused_cpumask_arg_;
    #define first_cpu(src) cpumask_first(&(src))
    #define next_cpu(n, src) cpumask_next((n), &(src))
    #define any_online_cpu(mask) cpumask_any_and(&(mask), &cpu_online_map)
    +#if NR_CPUS > BITS_PER_LONG
    +#define CPUMASK_ALLOC(m) struct m *m = kmalloc(sizeof(*m), GFP_KERNEL)
    +#define CPUMASK_FREE(m) kfree(m)
    +#else
    +#define CPUMASK_ALLOC(m) struct m _m, *m = &_m
    +#define CPUMASK_FREE(m)
    +#endif
    +#define CPUMASK_PTR(v, m) cpumask_t *v = &(m->v)
    /* End deprecated region. */

    #if NR_CPUS > 1
    @@ -438,15 +417,6 @@ extern cpumask_t cpu_mask_all;
    [0] = 1UL \
    } }

    -#if NR_CPUS > BITS_PER_LONG
    -#define CPUMASK_ALLOC(m) struct m *m = kmalloc(sizeof(*m), GFP_KERNEL)
    -#define CPUMASK_FREE(m) kfree(m)
    -#else
    -#define CPUMASK_ALLOC(m) struct m _m, *m = &_m
    -#define CPUMASK_FREE(m)
    -#endif
    -#define CPUMASK_PTR(v, m) cpumask_t *v = &(m->v)
    -
    #if NR_CPUS == 1

    #define cpumask_first(src) ({ (void)(src); 0; })


  13. [PATCH 35/35] x86: clean up speedstep-centrino and reduce cpumask_t usage From: Rusty Russell <rusty@rustcorp.com.au>

    1) The #ifdef CONFIG_HOTPLUG_CPU seems unnecessary these days.
    2) The loop can simply skip over offline cpus, rather than creating a tmp mask.
    3) set_mask is set to either a single cpu or all online cpus in a policy.
    Since it's just used for set_cpus_allowed(), any offline cpus in a policy
    don't matter, so we can just use cpumask_of_cpu() or the policy->cpus.
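
Point 3 distilled into a fragment (using the patch's own variable names, as in the diff below):

    const cpumask_t *mask;

    if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
            mask = &policy->cpus;           /* offline CPUs in it are harmless */
    else
            mask = &cpumask_of_cpu(j);      /* pin to the one CPU */

    set_cpus_allowed_ptr(current, mask);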

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c | 51 ++++++++++-------------
    1 file changed, 24 insertions(+), 27 deletions(-)

    --- linux-2.6.28.orig/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
    +++ linux-2.6.28/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
    @@ -552,9 +552,7 @@ static int centrino_verify (struct cpufr
    * Sets a new CPUFreq policy.
    */
    struct allmasks {
    - cpumask_t online_policy_cpus;
    cpumask_t saved_mask;
    - cpumask_t set_mask;
    cpumask_t covered_cpus;
    };

    @@ -568,9 +566,7 @@ static int centrino_target (struct cpufr
    int retval = 0;
    unsigned int j, k, first_cpu, tmp;
    CPUMASK_ALLOC(allmasks);
    - CPUMASK_PTR(online_policy_cpus, allmasks);
    CPUMASK_PTR(saved_mask, allmasks);
    - CPUMASK_PTR(set_mask, allmasks);
    CPUMASK_PTR(covered_cpus, allmasks);

    if (unlikely(allmasks == NULL))
    @@ -590,30 +586,28 @@ static int centrino_target (struct cpufr
    goto out;
    }

    -#ifdef CONFIG_HOTPLUG_CPU
    - /* cpufreq holds the hotplug lock, so we are safe from here on */
    - cpus_and(*online_policy_cpus, cpu_online_map, policy->cpus);
    -#else
    - *online_policy_cpus = policy->cpus;
    -#endif
    -
    *saved_mask = current->cpus_allowed;
    first_cpu = 1;
    cpus_clear(*covered_cpus);
    - for_each_cpu_mask_nr(j, *online_policy_cpus) {
    + for_each_cpu_mask_nr(j, policy->cpus) {
    + const cpumask_t *mask;
    +
    + /* cpufreq holds the hotplug lock, so we are safe here */
    + if (!cpu_online(j))
    + continue;
    +
    /*
    * Support for SMP systems.
    * Make sure we are running on CPU that wants to change freq
    */
    - cpus_clear(*set_mask);
    if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
    - cpus_or(*set_mask, *set_mask, *online_policy_cpus);
    + mask = &policy->cpus;
    else
    - cpu_set(j, *set_mask);
    + mask = &cpumask_of_cpu(j);

    - set_cpus_allowed_ptr(current, set_mask);
    + set_cpus_allowed_ptr(current, mask);
    preempt_disable();
    - if (unlikely(!cpu_isset(smp_processor_id(), *set_mask))) {
    + if (unlikely(!cpu_isset(smp_processor_id(), *mask))) {
    dprintk("couldn't limit to CPUs in this domain\n");
    retval = -EAGAIN;
    if (first_cpu) {
    @@ -641,7 +635,9 @@ static int centrino_target (struct cpufr
    dprintk("target=%dkHz old=%d new=%d msr=%04x\n",
    target_freq, freqs.old, freqs.new, msr);

    - for_each_cpu_mask_nr(k, *online_policy_cpus) {
    + for_each_cpu_mask_nr(k, policy->cpus) {
    + if (!cpu_online(k))
    + continue;
    freqs.cpu = k;
    cpufreq_notify_transition(&freqs,
    CPUFREQ_PRECHANGE);
    @@ -664,7 +660,9 @@ static int centrino_target (struct cpufr
    preempt_enable();
    }

    - for_each_cpu_mask_nr(k, *online_policy_cpus) {
    + for_each_cpu_mask_nr(k, policy->cpus) {
    + if (!cpu_online(k))
    + continue;
    freqs.cpu = k;
    cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
    }
    @@ -677,18 +675,17 @@ static int centrino_target (struct cpufr
    * Best effort undo..
    */

    - if (!cpus_empty(*covered_cpus))
    - for_each_cpu_mask_nr(j, *covered_cpus) {
    - set_cpus_allowed_ptr(current,
    - &cpumask_of_cpu(j));
    - wrmsr(MSR_IA32_PERF_CTL, oldmsr, h);
    - }
    + for_each_cpu_mask_nr(j, *covered_cpus) {
    + set_cpus_allowed_ptr(current, &cpumask_of_cpu(j));
    + wrmsr(MSR_IA32_PERF_CTL, oldmsr, h);
    + }

    tmp = freqs.new;
    freqs.new = freqs.old;
    freqs.old = tmp;
    - for_each_cpu_mask_nr(j, *online_policy_cpus) {
    - freqs.cpu = j;
    + for_each_cpu_mask_nr(j, policy->cpus) {
    + if (!cpu_online(j))
    + continue;
    cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
    cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
    }


  14. Re: [PATCH 18/35] cpumask: use cpumask_bits() everywhere.-resubmit

    cpumask: use cpumask_bits() everywhere.
    From: Rusty Russell

    Instead of accessing ->bits, we use cpumask_bits(). This will be very
    useful when 'struct cpumask' has a hidden definition.
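
A small sketch of the accessor in use (hypothetical helper; cpumask_bits() yields the underlying unsigned long array):

    #include <linux/bitmap.h>
    #include <linux/cpumask.h>

    /* Keeps compiling once struct cpumask's definition is hidden,
     * because nothing touches ->bits directly. */
    static int count_set(const struct cpumask *mask)
    {
            return bitmap_weight(cpumask_bits(mask), nr_cpumask_bits);
    }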

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 70 ++++++++++++++++++++++++++++-------------------
    include/linux/seq_file.h | 2 -
    kernel/time/timer_list.c | 4 +-
    lib/cpumask.c | 4 +-
    4 files changed, 47 insertions(+), 33 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -194,12 +194,12 @@ extern int nr_cpu_ids;

    static inline void cpumask_set_cpu(int cpu, volatile struct cpumask *dstp)
    {
    - set_bit(cpu, dstp->bits);
    + set_bit(cpu, cpumask_bits(dstp));
    }

    static inline void cpumask_clear_cpu(int cpu, volatile struct cpumask *dstp)
    {
    - clear_bit(cpu, dstp->bits);
    + clear_bit(cpu, cpumask_bits(dstp));
    }

    /* No static inline type checking - see Subtlety (1) above. */
    @@ -207,130 +207,142 @@ static inline void cpumask_clear_cpu(int

    static inline int cpumask_test_and_set_cpu(int cpu, struct cpumask *addr)
    {
    - return test_and_set_bit(cpu, addr->bits);
    + return test_and_set_bit(cpu, cpumask_bits(addr));
    }

    static inline void cpumask_setall(struct cpumask *dstp)
    {
    - bitmap_fill(dstp->bits, nr_cpumask_bits);
    + bitmap_fill(cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline void cpumask_clear(struct cpumask *dstp)
    {
    - bitmap_zero(dstp->bits, nr_cpumask_bits);
    + bitmap_zero(cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline void cpumask_and(struct cpumask *dstp,
    const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_and(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_and(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_or(struct cpumask *dstp, const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_or(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_or(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_xor(struct cpumask *dstp,
    const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_xor(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_xor(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_andnot(struct cpumask *dstp,
    const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nr_cpumask_bits);
    + bitmap_andnot(cpumask_bits(dstp), cpumask_bits(src1p),
    + cpumask_bits(src2p), nr_cpumask_bits);
    }

    static inline void cpumask_complement(struct cpumask *dstp,
    const struct cpumask *srcp)
    {
    - bitmap_complement(dstp->bits, srcp->bits, nr_cpumask_bits);
    + bitmap_complement(cpumask_bits(dstp), cpumask_bits(srcp),
    + nr_cpumask_bits);
    }

    static inline int cpumask_equal(const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - return bitmap_equal(src1p->bits, src2p->bits, nr_cpumask_bits);
    + return bitmap_equal(cpumask_bits(src1p), cpumask_bits(src2p),
    + nr_cpumask_bits);
    }

    static inline int cpumask_intersects(const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - return bitmap_intersects(src1p->bits, src2p->bits, nr_cpumask_bits);
    + return bitmap_intersects(cpumask_bits(src1p), cpumask_bits(src2p),
    + nr_cpumask_bits);
    }

    static inline int cpumask_subset(const struct cpumask *src1p,
    const struct cpumask *src2p)
    {
    - return bitmap_subset(src1p->bits, src2p->bits, nr_cpumask_bits);
    + return bitmap_subset(cpumask_bits(src1p), cpumask_bits(src2p),
    + nr_cpumask_bits);
    }

    static inline int cpumask_empty(const struct cpumask *srcp)
    {
    - return bitmap_empty(srcp->bits, nr_cpumask_bits);
    + return bitmap_empty(cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline int cpumask_full(const struct cpumask *srcp)
    {
    - return bitmap_full(srcp->bits, nr_cpumask_bits);
    + return bitmap_full(cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline int __cpus_weight(const cpumask_t *srcp, int nbits)
    {
    - return bitmap_weight(srcp->bits, nbits);
    + return bitmap_weight(cpumask_bits(srcp), nbits);
    }

    static inline int cpumask_weight(const struct cpumask *srcp)
    {
    - return bitmap_weight(srcp->bits, nr_cpumask_bits);
    + return bitmap_weight(cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline void cpumask_shift_right(struct cpumask *dstp,
    const struct cpumask *srcp, int n)
    {
    - bitmap_shift_right(dstp->bits, srcp->bits, n, nr_cpumask_bits);
    + bitmap_shift_right(cpumask_bits(dstp), cpumask_bits(srcp), n,
    + nr_cpumask_bits);
    }

    static inline void cpumask_shift_left(struct cpumask *dstp,
    const struct cpumask *srcp, int n)
    {
    - bitmap_shift_left(dstp->bits, srcp->bits, n, nr_cpumask_bits);
    + bitmap_shift_left(cpumask_bits(dstp), cpumask_bits(srcp), n,
    + nr_cpumask_bits);
    }

    static inline int cpumask_scnprintf(char *buf, int len,
    const struct cpumask *srcp)
    {
    - return bitmap_scnprintf(buf, len, srcp->bits, nr_cpumask_bits);
    + return bitmap_scnprintf(buf, len, cpumask_bits(srcp), nr_cpumask_bits);
    }

    static inline int cpumask_parse_user(const char __user *buf, int len,
    struct cpumask *dstp)
    {
    - return bitmap_parse_user(buf, len, dstp->bits, nr_cpumask_bits);
    + return bitmap_parse_user(buf, len, cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline int cpulist_scnprintf(char *buf, int len,
    const struct cpumask *srcp)
    {
    - return bitmap_scnlistprintf(buf, len, srcp->bits, nr_cpumask_bits);
    + return bitmap_scnlistprintf(buf, len, cpumask_bits(srcp),
    + nr_cpumask_bits);
    }

    static inline int cpulist_parse(const char *buf, struct cpumask *dstp)
    {
    - return bitmap_parselist(buf, dstp->bits, nr_cpumask_bits);
    + return bitmap_parselist(buf, cpumask_bits(dstp), nr_cpumask_bits);
    }

    static inline int cpumask_cpuremap(int oldbit,
    const struct cpumask *oldp,
    const struct cpumask *newp)
    {
    - return bitmap_bitremap(oldbit, oldp->bits, newp->bits, nr_cpumask_bits);
    + return bitmap_bitremap(oldbit, cpumask_bits(oldp), cpumask_bits(newp),
    + nr_cpumask_bits);
    }

    static inline void cpumask_remap(struct cpumask *dstp,
    @@ -338,21 +350,23 @@ static inline void cpumask_remap(struct
    const struct cpumask *oldp,
    const struct cpumask *newp)
    {
    - bitmap_remap(dstp->bits, srcp->bits, oldp->bits, newp->bits,
    - nr_cpumask_bits);
    + bitmap_remap(cpumask_bits(dstp), cpumask_bits(srcp),
    + cpumask_bits(oldp), cpumask_bits(newp), nr_cpumask_bits);
    }

    static inline void cpumask_onto(struct cpumask *dstp,
    const struct cpumask *origp,
    const struct cpumask *relmapp)
    {
    - bitmap_onto(dstp->bits, origp->bits, relmapp->bits, nr_cpumask_bits);
    + bitmap_onto(cpumask_bits(dstp), cpumask_bits(origp),
    + cpumask_bits(relmapp), nr_cpumask_bits);
    }

    static inline void cpumask_fold(struct cpumask *dstp,
    const struct cpumask *origp, int sz)
    {
    - bitmap_fold(dstp->bits, origp->bits, sz, nr_cpumask_bits);
    + bitmap_fold(cpumask_bits(dstp), cpumask_bits(origp), sz,
    + nr_cpumask_bits);
    }

    static inline void cpumask_copy(struct cpumask *dstp,
    --- linux-2.6.28.orig/include/linux/seq_file.h
    +++ linux-2.6.28/include/linux/seq_file.h
    @@ -52,7 +52,7 @@ int seq_path_root(struct seq_file *m, st
    int seq_bitmap(struct seq_file *m, unsigned long *bits, unsigned int nr_bits);
    static inline int seq_cpumask(struct seq_file *m, cpumask_t *mask)
    {
    - return seq_bitmap(m, mask->bits, nr_cpu_ids);
    + return seq_bitmap(m, cpumask_bits(mask), nr_cpu_ids);
    }

    static inline int seq_nodemask(struct seq_file *m, nodemask_t *mask)
    --- linux-2.6.28.orig/kernel/time/timer_list.c
    +++ linux-2.6.28/kernel/time/timer_list.c
    @@ -232,10 +232,10 @@ static void timer_list_show_tickdevices(
    #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
    print_tickdevice(m, tick_get_broadcast_device(), -1);
    SEQ_printf(m, "tick_broadcast_mask: %08lx\n",
    - tick_get_broadcast_mask()->bits[0]);
    + cpumask_bits(tick_get_broadcast_mask())[0]);
    #ifdef CONFIG_TICK_ONESHOT
    SEQ_printf(m, "tick_broadcast_oneshot_mask: %08lx\n",
    - tick_get_broadcast_oneshot_mask()->bits[0]);
    + cpumask_bits(tick_get_broadcast_oneshot_mask())[0]);
    #endif
    SEQ_printf(m, "\n");
    #endif
    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -5,13 +5,13 @@

    int __first_cpu(const cpumask_t *srcp)
    {
    - return find_first_bit(srcp->bits, nr_cpumask_bits);
    + return find_first_bit(cpumask_bits(srcp), nr_cpumask_bits);
    }
    EXPORT_SYMBOL(__first_cpu);

    int __next_cpu(int n, const cpumask_t *srcp)
    {
    - return find_next_bit(srcp->bits, nr_cpumask_bits, n+1);
    + return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
    }
    EXPORT_SYMBOL(__next_cpu);



  15. [PATCH 21/35] cpumask: cpumask_first/cpumask_next From: Rusty Russell <rusty@rustcorp.com.au>

    Pointer-taking variants of first_cpu/next_cpu.
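
    As a minimal sketch, the new calling convention looks like this
    (hypothetical caller, not part of the patch):

    #include <linux/kernel.h>
    #include <linux/cpumask.h>

    /* Walk the online map using the renamed, pointer-taking ops. */
    static void walk_online(void)
    {
            int cpu;

            for (cpu = cpumask_first(&cpu_online_map);
                 cpu < nr_cpu_ids;
                 cpu = cpumask_next(cpu, &cpu_online_map))
                    printk(KERN_INFO "cpu %d is online\n", cpu);
    }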

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 18 +++++++++---------
    lib/cpumask.c | 10 +++++-----
    2 files changed, 14 insertions(+), 14 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -46,8 +46,8 @@
    * void cpumask_shift_right(dst, src, n) Shift right
    * void cpumask_shift_left(dst, src, n) Shift left
    *
    - * int first_cpu(mask) Number lowest set bit, or >= nr_cpu_ids
    - * int next_cpu(cpu, mask) Next cpu past 'cpu', or >= nr_cpu_ids
    + * int cpumask_first(mask) Number lowest set bit, or >= nr_cpu_ids
    + * int cpumask_next(cpu, mask) Next cpu past 'cpu', or >= nr_cpu_ids
    *
    * void cpumask_copy(dmask, smask) dmask = smask
    *
    @@ -178,6 +178,8 @@ extern cpumask_t _unused_cpumask_arg_;
    #define for_each_cpu_mask(cpu, mask) for_each_cpu(cpu, &(mask))
    #define for_each_cpu_mask_and(cpu, mask, and) \
    for_each_cpu_and(cpu, &(mask), &(and))
    +#define first_cpu(src) cpumask_first(&(src))
    +#define next_cpu(n, src) cpumask_next((n), &(src))
    /* End deprecated region. */

    #if NR_CPUS > 1
    @@ -441,8 +443,8 @@ extern cpumask_t cpu_mask_all;

    #if NR_CPUS == 1

    -#define first_cpu(src) ({ (void)(src); 0; })
    -#define next_cpu(n, src) ({ (void)(src); 1; })
    +#define cpumask_first(src) ({ (void)(src); 0; })
    +#define cpumask_next(n, src) ({ (void)(src); 1; })
    #define cpumask_next_and(n, srcp, andp) ({ (void)(srcp), (void)(andp); 1; })
    #define any_online_cpu(mask) 0

    @@ -453,18 +455,16 @@ extern cpumask_t cpu_mask_all;

    #else /* NR_CPUS > 1 */

    -int __first_cpu(const cpumask_t *srcp);
    -int __next_cpu(int n, const cpumask_t *srcp);
    +int cpumask_first(const cpumask_t *srcp);
    +int cpumask_next(int n, const cpumask_t *srcp);
    int cpumask_next_and(int n, const cpumask_t *srcp, const cpumask_t *andp);
    int __any_online_cpu(const cpumask_t *mask);

    -#define first_cpu(src) __first_cpu(&(src))
    -#define next_cpu(n, src) __next_cpu((n), &(src))
    #define any_online_cpu(mask) __any_online_cpu(&(mask))

    #define for_each_cpu(cpu, mask) \
    for ((cpu) = -1; \
    - (cpu) = __next_cpu((cpu), (mask)), \
    + (cpu) = cpumask_next((cpu), (mask)), \
    (cpu) < nr_cpu_ids;)
    #define for_each_cpu_and(cpu, mask, and) \
    for ((cpu) = -1; \
    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -3,21 +3,21 @@
    #include <linux/cpumask.h>
    #include <linux/module.h>

    -int __first_cpu(const cpumask_t *srcp)
    +int cpumask_first(const cpumask_t *srcp)
    {
    return find_first_bit(cpumask_bits(srcp), nr_cpumask_bits);
    }
    -EXPORT_SYMBOL(__first_cpu);
    +EXPORT_SYMBOL(cpumask_first);

    -int __next_cpu(int n, const cpumask_t *srcp)
    +int cpumask_next(int n, const cpumask_t *srcp)
    {
    return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
    }
    -EXPORT_SYMBOL(__next_cpu);
    +EXPORT_SYMBOL(cpumask_next);

    int cpumask_next_and(int n, const cpumask_t *srcp, const cpumask_t *andp)
    {
    - while ((n = next_cpu(n, *srcp)) < nr_cpu_ids)
    + while ((n = cpumask_next(n, srcp)) < nr_cpu_ids)
    if (cpumask_test_cpu(n, andp))
    break;
    return n;


  16. [PATCH 25/35] cpumask: get rid of boutique sched.c allocations, use cpumask_var_t. From: Rusty Russell <rusty@rustcorp.com.au>

    Using lots of allocs rather than one big alloc is less efficient, but
    who cares for this setup function?
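
    The resulting pattern, in sketch form (hypothetical setup function,
    assuming only the cpumask_var_t API added earlier in this series):

    #include <linux/cpumask.h>
    #include <linux/gfp.h>

    static int example_setup(void)
    {
            cpumask_var_t tmpmask;
            int err = -ENOMEM;

            /* With CONFIG_CPUMASK_OFFSTACK this kmallocs the mask;
             * otherwise cpumask_var_t is a one-element array and the
             * "allocation" cannot fail. */
            if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
                    return err;

            cpumask_copy(tmpmask, &cpu_online_map);
            /* ... use tmpmask as scratch space ... */
            err = 0;

            free_cpumask_var(tmpmask);
            return err;
    }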

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    kernel/sched.c | 133 ++++++++++++++++++++++++---------------------------------
    1 file changed, 56 insertions(+), 77 deletions(-)

    --- linux-2.6.28.orig/kernel/sched.c
    +++ linux-2.6.28/kernel/sched.c
    @@ -7292,40 +7292,6 @@ SD_INIT_FUNC(CPU)
    SD_INIT_FUNC(MC)
    #endif

    -/*
    - * To minimize stack usage kmalloc room for cpumasks and share the
    - * space as the usage in build_sched_domains() dictates. Used only
    - * if the amount of space is significant.
    - */
    -struct allmasks {
    - cpumask_t tmpmask; /* make this one first */
    - union {
    - cpumask_t nodemask;
    - cpumask_t this_sibling_map;
    - cpumask_t this_core_map;
    - };
    - cpumask_t send_covered;
    -
    -#ifdef CONFIG_NUMA
    - cpumask_t domainspan;
    - cpumask_t covered;
    - cpumask_t notcovered;
    -#endif
    -};
    -
    -#if NR_CPUS > 128
    -#define SCHED_CPUMASK_ALLOC 1
    -#define SCHED_CPUMASK_FREE(v) kfree(v)
    -#define SCHED_CPUMASK_DECLARE(v) struct allmasks *v
    -#else
    -#define SCHED_CPUMASK_ALLOC 0
    -#define SCHED_CPUMASK_FREE(v)
    -#define SCHED_CPUMASK_DECLARE(v) struct allmasks _v, *v = &_v
    -#endif
    -
    -#define SCHED_CPUMASK_VAR(v, a) cpumask_t *v = (cpumask_t *) \
    - ((unsigned long)(a) + offsetof(struct allmasks, v))
    -
    static int default_relax_domain_level = -1;

    static int __init setup_relax_domain_level(char *str)
    @@ -7368,14 +7334,35 @@ static void set_domain_attribute(struct
    static int __build_sched_domains(const cpumask_t *cpu_map,
    struct sched_domain_attr *attr)
    {
    - int i;
    + int i, err = -ENOMEM;
    struct root_domain *rd;
    - SCHED_CPUMASK_DECLARE(allmasks);
    - cpumask_t *tmpmask;
    + cpumask_var_t nodemask, this_sibling_map, this_core_map, send_covered,
    + tmpmask;
    #ifdef CONFIG_NUMA
    + cpumask_var_t domainspan, covered, notcovered;
    struct sched_group **sched_group_nodes = NULL;
    int sd_allnodes = 0;

    + if (!alloc_cpumask_var(&domainspan, GFP_KERNEL))
    + goto out;
    + if (!alloc_cpumask_var(&covered, GFP_KERNEL))
    + goto free_domainspan;
    + if (!alloc_cpumask_var(&notcovered, GFP_KERNEL))
    + goto free_covered;
    +#endif
    +
    + if (!alloc_cpumask_var(&nodemask, GFP_KERNEL))
    + goto free_notcovered;
    + if (!alloc_cpumask_var(&this_sibling_map, GFP_KERNEL))
    + goto free_nodemask;
    + if (!alloc_cpumask_var(&this_core_map, GFP_KERNEL))
    + goto free_this_sibling_map;
    + if (!alloc_cpumask_var(&send_covered, GFP_KERNEL))
    + goto free_this_core_map;
    + if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
    + goto free_send_covered;
    +
    +#ifdef CONFIG_NUMA
    /*
    * Allocate the per-node list of sched groups
    */
    @@ -7383,34 +7370,16 @@ static int __build_sched_domains(const c
    GFP_KERNEL);
    if (!sched_group_nodes) {
    printk(KERN_WARNING "Can not alloc sched group node list\n");
    - return -ENOMEM;
    + goto free_tmpmask;
    }
    #endif

    rd = alloc_rootdomain();
    if (!rd) {
    printk(KERN_WARNING "Cannot alloc root domain\n");
    -#ifdef CONFIG_NUMA
    - kfree(sched_group_nodes);
    -#endif
    - return -ENOMEM;
    + goto free_sched_groups;
    }

    -#if SCHED_CPUMASK_ALLOC
    - /* get space for all scratch cpumask variables */
    - allmasks = kmalloc(sizeof(*allmasks), GFP_KERNEL);
    - if (!allmasks) {
    - printk(KERN_WARNING "Cannot alloc cpumask array\n");
    - kfree(rd);
    -#ifdef CONFIG_NUMA
    - kfree(sched_group_nodes);
    -#endif
    - return -ENOMEM;
    - }
    -#endif
    - tmpmask = (cpumask_t *)allmasks;
    -
    -
    #ifdef CONFIG_NUMA
    sched_group_nodes_bycpu[first_cpu(*cpu_map)] = sched_group_nodes;
    #endif
    @@ -7420,7 +7389,6 @@ static int __build_sched_domains(const c
    */
    for_each_cpu(i, cpu_map) {
    struct sched_domain *sd = NULL, *p;
    - SCHED_CPUMASK_VAR(nodemask, allmasks);

    *nodemask = node_to_cpumask(cpu_to_node(i));
    cpus_and(*nodemask, *nodemask, *cpu_map);
    @@ -7486,9 +7454,6 @@ static int __build_sched_domains(const c
    #ifdef CONFIG_SCHED_SMT
    /* Set up CPU (sibling) groups */
    for_each_cpu(i, cpu_map) {
    - SCHED_CPUMASK_VAR(this_sibling_map, allmasks);
    - SCHED_CPUMASK_VAR(send_covered, allmasks);
    -
    *this_sibling_map = per_cpu(cpu_sibling_map, i);
    cpus_and(*this_sibling_map, *this_sibling_map, *cpu_map);
    if (i != first_cpu(*this_sibling_map))
    @@ -7503,9 +7468,6 @@ static int __build_sched_domains(const c
    #ifdef CONFIG_SCHED_MC
    /* Set up multi-core groups */
    for_each_cpu(i, cpu_map) {
    - SCHED_CPUMASK_VAR(this_core_map, allmasks);
    - SCHED_CPUMASK_VAR(send_covered, allmasks);
    -
    *this_core_map = cpu_coregroup_map(i);
    cpus_and(*this_core_map, *this_core_map, *cpu_map);
    if (i != first_cpu(*this_core_map))
    @@ -7519,9 +7481,6 @@ static int __build_sched_domains(const c

    /* Set up physical groups */
    for (i = 0; i < nr_node_ids; i++) {
    - SCHED_CPUMASK_VAR(nodemask, allmasks);
    - SCHED_CPUMASK_VAR(send_covered, allmasks);
    -
    *nodemask = node_to_cpumask(i);
    cpus_and(*nodemask, *nodemask, *cpu_map);
    if (cpus_empty(*nodemask))
    @@ -7535,8 +7494,6 @@ static int __build_sched_domains(const c
    #ifdef CONFIG_NUMA
    /* Set up node groups */
    if (sd_allnodes) {
    - SCHED_CPUMASK_VAR(send_covered, allmasks);
    -
    init_sched_build_groups(cpu_map, cpu_map,
    &cpu_to_allnodes_group,
    send_covered, tmpmask);
    @@ -7545,9 +7502,6 @@ static int __build_sched_domains(const c
    for (i = 0; i < nr_node_ids; i++) {
    /* Set up node groups */
    struct sched_group *sg, *prev;
    - SCHED_CPUMASK_VAR(nodemask, allmasks);
    - SCHED_CPUMASK_VAR(domainspan, allmasks);
    - SCHED_CPUMASK_VAR(covered, allmasks);
    int j;

    *nodemask = node_to_cpumask(i);
    @@ -7582,7 +7536,6 @@ static int __build_sched_domains(const c
    prev = sg;

    for (j = 0; j < nr_node_ids; j++) {
    - SCHED_CPUMASK_VAR(notcovered, allmasks);
    int n = (i + j) % nr_node_ids;
    node_to_cpumask_ptr(pnodemask, n);

    @@ -7661,14 +7614,40 @@ static int __build_sched_domains(const c
    cpu_attach_domain(sd, rd, i);
    }

    - SCHED_CPUMASK_FREE((void *)allmasks);
    - return 0;
    + err = 0;
    +
    +free_tmpmask:
    + free_cpumask_var(tmpmask);
    +free_send_covered:
    + free_cpumask_var(send_covered);
    +free_this_core_map:
    + free_cpumask_var(this_core_map);
    +free_this_sibling_map:
    + free_cpumask_var(this_sibling_map);
    +free_nodemask:
    + free_cpumask_var(nodemask);
    +free_notcovered:
    +#ifdef CONFIG_NUMA
    + free_cpumask_var(notcovered);
    +free_covered:
    + free_cpumask_var(covered);
    +free_domainspan:
    + free_cpumask_var(domainspan);
    +out:
    +#endif
    + return err;
    +
    +free_sched_groups:
    +#ifdef CONFIG_NUMA
    + kfree(sched_group_nodes);
    +#endif
    + goto free_tmpmask;

    #ifdef CONFIG_NUMA
    error:
    free_sched_groups(cpu_map, tmpmask);
    - SCHED_CPUMASK_FREE((void *)allmasks);
    - return -ENOMEM;
    + kfree(rd);
    + goto free_tmpmask;
    #endif
    }



  17. [PATCH 22/35] cpumask: deprecate any_online_cpu() in favour of cpumask_any/cpumask_any_and From: Rusty Russell <rusty@rustcorp.com.au>

    any_online_cpu() is a good name, but it takes a cpumask_t, not a
    pointer.
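
    The conversion is mechanical; a sketch of before and after
    (hypothetical caller, with marked_cpus standing in for any mask):

    cpumask_t marked_cpus;
    int cpu;

    /* Old: the macro takes the whole mask by value. */
    cpu = any_online_cpu(marked_cpus);

    /* New: pointers only, and'ed against the online map; both forms
     * return >= nr_cpu_ids if no suitable cpu exists. */
    cpu = cpumask_any_and(&marked_cpus, &cpu_online_map);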

    Signed-off-by: Rusty Russell
    Signed-off-by: Mike Travis
    ---
    include/linux/cpumask.h | 11 ++++++-----
    lib/cpumask.c | 12 ------------
    2 files changed, 6 insertions(+), 17 deletions(-)

    --- linux-2.6.28.orig/include/linux/cpumask.h
    +++ linux-2.6.28/include/linux/cpumask.h
    @@ -108,7 +108,8 @@
    * int cpu_possible(cpu) Is some cpu possible?
    * int cpu_present(cpu) Is some cpu present (can schedule)?
    *
    - * int any_online_cpu(mask) First online cpu in mask
    + * int cpumask_any(mask) Any cpu in mask
    + * int cpumask_any_and(mask1,mask2) Any cpu in both masks
    *
    * for_each_possible_cpu(cpu) for-loop cpu over cpu_possible_map
    * for_each_online_cpu(cpu) for-loop cpu over cpu_online_map
    @@ -180,6 +181,7 @@ extern cpumask_t _unused_cpumask_arg_;
    for_each_cpu_and(cpu, &(mask), &(and))
    #define first_cpu(src) cpumask_first(&(src))
    #define next_cpu(n, src) cpumask_next((n), &(src))
    +#define any_online_cpu(mask) cpumask_any_and(&(mask), &cpu_online_map)
    /* End deprecated region. */

    #if NR_CPUS > 1
    @@ -380,6 +382,9 @@ static inline void cpumask_copy(struct c
    bitmap_copy(cpumask_bits(dstp), cpumask_bits(srcp), nr_cpumask_bits);
    }

    +#define cpumask_any(srcp) cpumask_first(srcp)
    +#define cpumask_any_and(mask1, mask2) cpumask_first_and((mask1), (mask2))
    +
    /*
    * Special-case data structure for "single bit set only" constant CPU masks.
    *
    @@ -446,7 +451,6 @@ extern cpumask_t cpu_mask_all;
    #define cpumask_first(src) ({ (void)(src); 0; })
    #define cpumask_next(n, src) ({ (void)(src); 1; })
    #define cpumask_next_and(n, srcp, andp) ({ (void)(srcp), (void)(andp); 1; })
    -#define any_online_cpu(mask) 0

    #define for_each_cpu(cpu, mask) \
    for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
    @@ -458,9 +462,6 @@ extern cpumask_t cpu_mask_all;
    int cpumask_first(const cpumask_t *srcp);
    int cpumask_next(int n, const cpumask_t *srcp);
    int cpumask_next_and(int n, const cpumask_t *srcp, const cpumask_t *andp);
    -int __any_online_cpu(const cpumask_t *mask);
    -
    -#define any_online_cpu(mask) __any_online_cpu(&(mask))

    #define for_each_cpu(cpu, mask) \
    for ((cpu) = -1; \
    --- linux-2.6.28.orig/lib/cpumask.c
    +++ linux-2.6.28/lib/cpumask.c
    @@ -24,18 +24,6 @@ int cpumask_next_and(int n, const cpumas
    }
    EXPORT_SYMBOL(cpumask_next_and);

    -int __any_online_cpu(const cpumask_t *mask)
    -{
    - int cpu;
    -
    - for_each_cpu(cpu, mask) {
    - if (cpu_online(cpu))
    - break;
    - }
    - return cpu;
    -}
    -EXPORT_SYMBOL(__any_online_cpu);
    -
    /* These are not inline because of header tangles. */
    #ifdef CONFIG_CPUMASK_OFFSTACK
    bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)


  18. Re: [PATCH 34/35] cpumask: Use accessors code. From: Rusty Russell <rusty@rustcorp.com.au>

    On Thursday 23 October 2008 13:09:00 Mike Travis wrote:
    > Use the accessors rather than frobbing bits directly. Most of this is
    > in arch code I haven't even compiled, but it is mostly straightforward.
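
    In sketch form, the conversions are of this shape (hypothetical arch
    code; set_cpu_present() is one of the accessors the series adds):

    /* Before: frob the bit in the map directly. */
    cpu_set(cpu, cpu_present_map);

    /* After: go through the accessor. */
    set_cpu_present(cpu, true);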


    OK, FYI this one isn't for this merge window: I'll be splitting it and sending
    it via the various arch maintainers.

    Also, I think a \n got lost, looking at the subject.

    Cheers,
    Rusty.

  19. Re: [PATCH 35/35] x86: clean up speedstep-centrino and reduce cpumask_t usage From: Rusty Russell <rusty@rustcorp.com.au>

    On Thursday 23 October 2008 13:09:01 Mike Travis wrote:
    > 1) The #ifdef CONFIG_HOTPLUG_CPU seems unnecessary these days.
    > 2) The loop can simply skip over offline cpus, rather than creating a tmp
    > mask.
    > 3) set_mask is set to either a single cpu or all online cpus in a
    > policy. Since it's just used for set_cpus_allowed(), any offline cpus in a
    > policy don't matter, so we can just use cpumask_of_cpu() or the
    > policy->cpus.
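
    Point (2) in sketch form: skip offline cpus as the loop meets them,
    instead of and'ing policy->cpus into a temporary mask (hypothetical
    loop, not the patch itself):

    int i;

    for_each_cpu(i, &policy->cpus) {
            if (!cpu_online(i))
                    continue;       /* offline cpus simply don't matter */
            /* ... per-cpu work ... */
    }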


    Note that this cleanup stands alone; it's just that when I read this
    code I couldn't help but tidy it up.

    Ingo: do you just want to put this in your normal tree for sending to Linus?

    Thanks,
    Rusty.

  20. Re: [PATCH 22/35] cpumask: deprecate any_online_cpu() in favour of cpumask_any/cpumask_any_and From: Rusty Russell <rusty@rustcorp.com.au>


    * Mike Travis wrote:

    > any_online_cpu() is a good name, but it takes a cpumask_t, not a
    > pointer.
    >
    > Signed-off-by: Rusty Russell
    > Signed-off-by: Mike Travis


    almost all the patches have a missing \n and thus wrong authorship...

    Ingo
