[PATCH] [1/20] x86: Make ptrace.h safe to include from assembler code - Kernel

This is a discussion on [PATCH] [1/20] x86: Make ptrace.h safe to include from assembler code - Kernel ; Signed-off-by: Andi Kleen --- include/asm-x86/ptrace-abi.h | 2 ++ 1 file changed, 2 insertions(+) Index: linux/include/asm-x86/ptrace-abi.h ================================================== ================= --- linux.orig/include/asm-x86/ptrace-abi.h +++ linux/include/asm-x86/ptrace-abi.h @@ -80,6 +80,7 @@ #define PTRACE_SINGLEBLOCK 33 /* resume execution until next branch */ +#ifndef __ASSEMBLY__ /* configuration/status structure ...

+ Reply to Thread
Page 1 of 2 1 2 LastLast
Results 1 to 20 of 27

Thread: [PATCH] [1/20] x86: Make ptrace.h safe to include from assembler code

  1. [PATCH] [1/20] x86: Make ptrace.h safe to include from assembler code


    Signed-off-by: Andi Kleen

    ---
    include/asm-x86/ptrace-abi.h | 2 ++
    1 file changed, 2 insertions(+)

    Index: linux/include/asm-x86/ptrace-abi.h
    ================================================== =================
    --- linux.orig/include/asm-x86/ptrace-abi.h
    +++ linux/include/asm-x86/ptrace-abi.h
    @@ -80,6 +80,7 @@

    #define PTRACE_SINGLEBLOCK 33 /* resume execution until next branch */

    +#ifndef __ASSEMBLY__
    /* configuration/status structure used in PTRACE_BTS_CONFIG and
    PTRACE_BTS_STATUS commands.
    */
    @@ -91,6 +92,7 @@ struct ptrace_bts_config {
    /* buffer overflow signal */
    unsigned int signal;
    };
    +#endif

    #define PTRACE_BTS_O_TRACE 0x1 /* branch trace */
    #define PTRACE_BTS_O_SCHED 0x2 /* scheduling events w/ jiffies */
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. [PATCH] [7/20] x86: Remove the now unused X86_FEATURE_SYNC_RDTSC

    Signed-off-by: Andi Kleen

    ---
    include/asm-x86/cpufeature.h | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    Index: linux/include/asm-x86/cpufeature.h
    ================================================== =================
    --- linux.orig/include/asm-x86/cpufeature.h
    +++ linux/include/asm-x86/cpufeature.h
    @@ -77,7 +77,7 @@
    #define X86_FEATURE_PEBS (3*32+12) /* Precise-Event Based Sampling */
    #define X86_FEATURE_BTS (3*32+13) /* Branch Trace Store */
    /* 14 free */
    -#define X86_FEATURE_SYNC_RDTSC (3*32+15) /* RDTSC synchronizes the CPU */
    +/* 15 free */
    #define X86_FEATURE_REP_GOOD (3*32+16) /* rep microcode works well on this CPU */
    #define X86_FEATURE_MFENCE_RDTSC (3*32+17) /* Mfence synchronizes RDTSC */
    #define X86_FEATURE_LFENCE_RDTSC (3*32+18) /* Lfence synchronizes RDTSC */
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  3. [PATCH] [13/20] x86: Use a deferrable timer for the correctable machine check poller


    They are not time critical and delaying them a little for
    the next regular wakeup is no problem.

    Also when a CPU is idle then it is unlikely to generate
    errors anyways, so it is ok to check only when the CPU
    is actually doing something.

    Signed-off-by: Andi Kleen

    ---
    arch/x86/kernel/cpu/mcheck/mce_64.c | 8 +++-----
    1 file changed, 3 insertions(+), 5 deletions(-)

    Index: linux/arch/x86/kernel/cpu/mcheck/mce_64.c
    ================================================== =================
    --- linux.orig/arch/x86/kernel/cpu/mcheck/mce_64.c
    +++ linux/arch/x86/kernel/cpu/mcheck/mce_64.c
    @@ -379,8 +379,7 @@ static void mcheck_timer(struct work_str
    if (mce_notify_user()) {
    next_interval = max(next_interval/2, HZ/100);
    } else {
    - next_interval = min(next_interval * 2,
    - (int)round_jiffies_relative(check_interval*HZ));
    + next_interval = min(next_interval * 2, check_interval*HZ);
    }

    cpu = smp_processor_id();
    @@ -442,8 +441,7 @@ static void mce_timers(int restart)
    struct delayed_work *w = &per_cpu(mcheck_work, i);
    cancel_delayed_work_sync(w);
    if (restart)
    - schedule_delayed_work_on(i, w,
    - round_jiffies_relative(next_interval));
    + schedule_delayed_work_on(i, w, next_interval);
    }
    }

    @@ -553,7 +551,7 @@ void __cpuinit mcheck_init(struct cpuinf
    int cpu = smp_processor_id();
    static cpumask_t mce_cpus = CPU_MASK_NONE;

    - INIT_DELAYED_WORK(&per_cpu(mcheck_work, cpu), mcheck_timer);
    + INIT_DELAYED_WORK_DEFERRABLE(&per_cpu(mcheck_work, cpu), mcheck_timer);

    mce_cpu_quirks(c);

    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  4. [PATCH] [5/20] x86: Introduce nsec_barrier()


    nsec_barrier() is a new barrier primitive that stops RDTSC speculation
    to avoid races with timer interrupts on other CPUs.

    Add it to all architectures. Except for x86 it is a nop right now.
    I only tested x86, but it's a very simple change.

    On x86 it expands either to LFENCE (for Intel CPUs) or MFENCE (for
    AMD CPUs) which stops RDTSC on all currently known microarchitectures
    that implement SSE. On CPUs without SSE there is generally no RDTSC
    speculation.

    Signed-off-by: Andi Kleen

    ---
    arch/x86/kernel/vsyscall_64.c | 2 ++
    include/asm-alpha/barrier.h | 2 ++
    include/asm-arm/system.h | 1 +
    include/asm-avr32/system.h | 1 +
    include/asm-blackfin/system.h | 1 +
    include/asm-cris/system.h | 1 +
    include/asm-frv/system.h | 1 +
    include/asm-h8300/system.h | 1 +
    include/asm-ia64/system.h | 1 +
    include/asm-m32r/system.h | 1 +
    include/asm-m68k/system.h | 1 +
    include/asm-m68knommu/system.h | 1 +
    include/asm-mips/barrier.h | 2 ++
    include/asm-parisc/system.h | 1 +
    include/asm-powerpc/system.h | 1 +
    include/asm-ppc/system.h | 1 +
    include/asm-s390/system.h | 1 +
    include/asm-sh/system.h | 1 +
    include/asm-sparc/system.h | 1 +
    include/asm-sparc64/system.h | 2 ++
    include/asm-v850/system.h | 2 ++
    include/asm-x86/system.h | 11 +++++++++++
    include/asm-xtensa/system.h | 1 +
    kernel/time/timekeeping.c | 2 ++
    24 files changed, 40 insertions(+)

    Index: linux/arch/x86/kernel/vsyscall_64.c
    ================================================== =================
    --- linux.orig/arch/x86/kernel/vsyscall_64.c
    +++ linux/arch/x86/kernel/vsyscall_64.c
    @@ -126,6 +126,7 @@ static __always_inline void do_vgettimeo
    cycle_t (*vread)(void);
    do {
    seq = read_seqbegin(&__vsyscall_gtod_data.lock);
    + nsec_barrier();

    vread = __vsyscall_gtod_data.clock.vread;
    if (unlikely(!__vsyscall_gtod_data.sysctl_enabled || !vread)) {
    @@ -140,6 +141,7 @@ static __always_inline void do_vgettimeo

    tv->tv_sec = __vsyscall_gtod_data.wall_time_sec;
    nsec = __vsyscall_gtod_data.wall_time_nsec;
    + nsec_barrier();
    } while (read_seqretry(&__vsyscall_gtod_data.lock, seq));

    /* calculate interval: */
    Index: linux/kernel/time/timekeeping.c
    ================================================== =================
    --- linux.orig/kernel/time/timekeeping.c
    +++ linux/kernel/time/timekeeping.c
    @@ -94,10 +94,12 @@ void getnstimeofday(struct timespec *ts)

    do {
    seq = read_seqbegin(&xtime_lock);
    + nsec_barrier();

    *ts = xtime;
    nsecs = __get_nsec_offset();

    + nsec_barrier();
    } while (read_seqretry(&xtime_lock, seq));

    timespec_add_ns(ts, nsecs);
    Index: linux/include/asm-x86/system.h
    ================================================== =================
    --- linux.orig/include/asm-x86/system.h
    +++ linux/include/asm-x86/system.h
    @@ -5,6 +5,7 @@
    #include
    #include
    #include
    +#include

    #include
    #include
    @@ -395,5 +396,15 @@ void default_idle(void);
    #define set_mb(var, value) do { var = value; barrier(); } while (0)
    #endif

    +/* Stop RDTSC speculation. This is needed when you need to use RDTSC
    + * (or get_cycles or vread that possibly accesses the TSC) in a defined
    + * code region.
    + * Could use an alternative three way for this if there was one.
    + */
    +static inline void nsec_barrier(void)
    +{
    + alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
    + alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
    +}

    #endif
    Index: linux/include/asm-alpha/barrier.h
    ================================================== =================
    --- linux.orig/include/asm-alpha/barrier.h
    +++ linux/include/asm-alpha/barrier.h
    @@ -15,6 +15,8 @@ __asm__ __volatile__("wmb": : :"memory")
    #define read_barrier_depends() \
    __asm__ __volatile__("mb": : :"memory")

    +#define nsec_barrier() barrier()
    +
    #ifdef CONFIG_SMP
    #define smp_mb() mb()
    #define smp_rmb() rmb()
    Index: linux/include/asm-arm/system.h
    ================================================== =================
    --- linux.orig/include/asm-arm/system.h
    +++ linux/include/asm-arm/system.h
    @@ -191,6 +191,7 @@ extern unsigned int user_debug;
    #endif
    #define read_barrier_depends() do { } while(0)
    #define smp_read_barrier_depends() do { } while(0)
    +#define nsec_barrier() barrier()

    #define set_mb(var, value) do { var = value; smp_mb(); } while (0)
    #define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
    Index: linux/include/asm-avr32/system.h
    ================================================== =================
    --- linux.orig/include/asm-avr32/system.h
    +++ linux/include/asm-avr32/system.h
    @@ -25,6 +25,7 @@
    #define wmb() asm volatile("sync 0" : : : "memory")
    #define read_barrier_depends() do { } while(0)
    #define set_mb(var, value) do { var = value; mb(); } while(0)
    +#define nsec_barrier() barrier();

    /*
    * Help PathFinder and other Nexus-compliant debuggers keep track of
    Index: linux/include/asm-blackfin/system.h
    ================================================== =================
    --- linux.orig/include/asm-blackfin/system.h
    +++ linux/include/asm-blackfin/system.h
    @@ -128,6 +128,7 @@ extern unsigned long irq_flags;
    #define mb() asm volatile ("" : : :"memory")
    #define rmb() asm volatile ("" : : :"memory")
    #define wmb() asm volatile ("" : : :"memory")
    +#define nsec_barrier() barrier()
    #define set_mb(var, value) do { (void) xchg(&var, value); } while (0)

    #define read_barrier_depends() do { } while(0)
    Index: linux/include/asm-cris/system.h
    ================================================== =================
    --- linux.orig/include/asm-cris/system.h
    +++ linux/include/asm-cris/system.h
    @@ -15,6 +15,7 @@ extern struct task_struct *resume(struct
    #define mb() barrier()
    #define rmb() mb()
    #define wmb() mb()
    +#define nsec_barrier() barrier()
    #define read_barrier_depends() do { } while(0)
    #define set_mb(var, value) do { var = value; mb(); } while (0)

    Index: linux/include/asm-frv/system.h
    ================================================== =================
    --- linux.orig/include/asm-frv/system.h
    +++ linux/include/asm-frv/system.h
    @@ -179,6 +179,7 @@ do { \
    #define rmb() asm volatile ("membar" : : :"memory")
    #define wmb() asm volatile ("membar" : : :"memory")
    #define set_mb(var, value) do { var = value; mb(); } while (0)
    +#define nsec_barrier() barrier()

    #define smp_mb() mb()
    #define smp_rmb() rmb()
    Index: linux/include/asm-h8300/system.h
    ================================================== =================
    --- linux.orig/include/asm-h8300/system.h
    +++ linux/include/asm-h8300/system.h
    @@ -83,6 +83,7 @@ asmlinkage void resume(void);
    #define rmb() asm volatile ("" : : :"memory")
    #define wmb() asm volatile ("" : : :"memory")
    #define set_mb(var, value) do { xchg(&var, value); } while (0)
    +#define nsec_barrier() barrier()

    #ifdef CONFIG_SMP
    #define smp_mb() mb()
    Index: linux/include/asm-ia64/system.h
    ================================================== =================
    --- linux.orig/include/asm-ia64/system.h
    +++ linux/include/asm-ia64/system.h
    @@ -86,6 +86,7 @@ extern struct ia64_boot_param {
    #define rmb() mb()
    #define wmb() mb()
    #define read_barrier_depends() do { } while(0)
    +#define nsec_barrier() barrier()

    #ifdef CONFIG_SMP
    # define smp_mb() mb()
    Index: linux/include/asm-m32r/system.h
    ================================================== =================
    --- linux.orig/include/asm-m32r/system.h
    +++ linux/include/asm-m32r/system.h
    @@ -267,6 +267,7 @@ __cmpxchg(volatile void *ptr, unsigned l
    #define mb() barrier()
    #define rmb() mb()
    #define wmb() mb()
    +#define nsec_barrier() barrier()

    /**
    * read_barrier_depends - Flush all pending reads that subsequents reads
    Index: linux/include/asm-m68k/system.h
    ================================================== =================
    --- linux.orig/include/asm-m68k/system.h
    +++ linux/include/asm-m68k/system.h
    @@ -56,6 +56,7 @@ asmlinkage void resume(void);
    #define wmb() barrier()
    #define read_barrier_depends() ((void)0)
    #define set_mb(var, value) ({ (var) = (value); wmb(); })
    +#define nsec_barrier() barrier()

    #define smp_mb() barrier()
    #define smp_rmb() barrier()
    Index: linux/include/asm-m68knommu/system.h
    ================================================== =================
    --- linux.orig/include/asm-m68knommu/system.h
    +++ linux/include/asm-m68knommu/system.h
    @@ -105,6 +105,7 @@ asmlinkage void resume(void);
    #define rmb() asm volatile ("" : : :"memory")
    #define wmb() asm volatile ("" : : :"memory")
    #define set_mb(var, value) do { xchg(&var, value); } while (0)
    +#define nsec_barrier() barrier()

    #ifdef CONFIG_SMP
    #define smp_mb() mb()
    Index: linux/include/asm-mips/barrier.h
    ================================================== =================
    --- linux.orig/include/asm-mips/barrier.h
    +++ linux/include/asm-mips/barrier.h
    @@ -138,4 +138,6 @@
    #define smp_llsc_rmb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
    #define smp_llsc_wmb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")

    +#define nsec_barrier() barrier()
    +
    #endif /* __ASM_BARRIER_H */
    Index: linux/include/asm-parisc/system.h
    ================================================== =================
    --- linux.orig/include/asm-parisc/system.h
    +++ linux/include/asm-parisc/system.h
    @@ -130,6 +130,7 @@ static inline void set_eiem(unsigned lon
    #define smp_wmb() mb()
    #define smp_read_barrier_depends() do { } while(0)
    #define read_barrier_depends() do { } while(0)
    +#define nsec_barrier() barrier()

    #define set_mb(var, value) do { var = value; mb(); } while (0)

    Index: linux/include/asm-powerpc/system.h
    ================================================== =================
    --- linux.orig/include/asm-powerpc/system.h
    +++ linux/include/asm-powerpc/system.h
    @@ -36,6 +36,7 @@
    #define rmb() __asm__ __volatile__ (__stringify(LWSYNC) : : : "memory")
    #define wmb() __asm__ __volatile__ ("sync" : : : "memory")
    #define read_barrier_depends() do { } while(0)
    +#define nsec_barrier() barrier()

    #define set_mb(var, value) do { var = value; mb(); } while (0)

    Index: linux/include/asm-ppc/system.h
    ================================================== =================
    --- linux.orig/include/asm-ppc/system.h
    +++ linux/include/asm-ppc/system.h
    @@ -30,6 +30,7 @@
    #define rmb() __asm__ __volatile__ ("sync" : : : "memory")
    #define wmb() __asm__ __volatile__ ("eieio" : : : "memory")
    #define read_barrier_depends() do { } while(0)
    +#define nsec_barrier() barrier()

    #define set_mb(var, value) do { var = value; mb(); } while (0)

    Index: linux/include/asm-s390/system.h
    ================================================== =================
    --- linux.orig/include/asm-s390/system.h
    +++ linux/include/asm-s390/system.h
    @@ -297,6 +297,7 @@ __cmpxchg(volatile void *ptr, unsigned l
    #define smp_read_barrier_depends() read_barrier_depends()
    #define smp_mb__before_clear_bit() smp_mb()
    #define smp_mb__after_clear_bit() smp_mb()
    +#define nsec_barrier() barrier()


    #define set_mb(var, value) do { var = value; mb(); } while (0)
    Index: linux/include/asm-sh/system.h
    ================================================== =================
    --- linux.orig/include/asm-sh/system.h
    +++ linux/include/asm-sh/system.h
    @@ -104,6 +104,7 @@ struct task_struct *__switch_to(struct t
    #define ctrl_barrier() __asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
    #define read_barrier_depends() do { } while(0)
    #endif
    +#define nsec_barrier() barrier()

    #ifdef CONFIG_SMP
    #define smp_mb() mb()
    Index: linux/include/asm-sparc/system.h
    ================================================== =================
    --- linux.orig/include/asm-sparc/system.h
    +++ linux/include/asm-sparc/system.h
    @@ -174,6 +174,7 @@ extern void fpsave(unsigned long *fpregs
    #define smp_rmb() __asm__ __volatile__("":::"memory")
    #define smp_wmb() __asm__ __volatile__("":::"memory")
    #define smp_read_barrier_depends() do { } while(0)
    +#define nsec_barrier() barrier()

    #define nop() __asm__ __volatile__ ("nop")

    Index: linux/include/asm-sparc64/system.h
    ================================================== =================
    --- linux.orig/include/asm-sparc64/system.h
    +++ linux/include/asm-sparc64/system.h
    @@ -74,6 +74,8 @@ do { __asm__ __volatile__("ba,pt %%xcc,

    #endif

    +#define nsec_barrier() barrier()
    +
    #define nop() __asm__ __volatile__ ("nop")

    #define read_barrier_depends() do { } while(0)
    Index: linux/include/asm-v850/system.h
    ================================================== =================
    --- linux.orig/include/asm-v850/system.h
    +++ linux/include/asm-v850/system.h
    @@ -73,6 +73,8 @@ static inline int irqs_disabled (void)
    #define smp_wmb() wmb ()
    #define smp_read_barrier_depends() read_barrier_depends()

    +#define nsec_barrier() barrier()
    +
    #define xchg(ptr, with) \
    ((__typeof__ (*(ptr)))__xchg ((unsigned long)(with), (ptr), sizeof (*(ptr))))

    Index: linux/include/asm-xtensa/system.h
    ================================================== =================
    --- linux.orig/include/asm-xtensa/system.h
    +++ linux/include/asm-xtensa/system.h
    @@ -89,6 +89,7 @@ static inline void disable_coprocessor(i
    #define mb() barrier()
    #define rmb() mb()
    #define wmb() mb()
    +#define nsec_barrier() barrier()

    #ifdef CONFIG_SMP
    #error smp_* not defined
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  5. [PATCH] [4/20] x86: Move nop declarations into separate include file


    Moving things out of processor.h is always a good thing.

    Also needed to avoid include loop in later patch.

    Signed-off-by: Andi Kleen

    ---
    include/asm-x86/nops.h | 90 ++++++++++++++++++++++++++++++++++++++++++++
    include/asm-x86/processor.h | 86 ------------------------------------------
    2 files changed, 91 insertions(+), 85 deletions(-)

    Index: linux/include/asm-x86/nops.h
    ================================================== =================
    --- /dev/null
    +++ linux/include/asm-x86/nops.h
    @@ -0,0 +1,90 @@
    +#ifndef _ASM_NOPS_H
    +#define _ASM_NOPS_H 1
    +
    +/* Define nops for use with alternative() */
    +
    +/* generic versions from gas */
    +#define GENERIC_NOP1 ".byte 0x90\n"
    +#define GENERIC_NOP2 ".byte 0x89,0xf6\n"
    +#define GENERIC_NOP3 ".byte 0x8d,0x76,0x00\n"
    +#define GENERIC_NOP4 ".byte 0x8d,0x74,0x26,0x00\n"
    +#define GENERIC_NOP5 GENERIC_NOP1 GENERIC_NOP4
    +#define GENERIC_NOP6 ".byte 0x8d,0xb6,0x00,0x00,0x00,0x00\n"
    +#define GENERIC_NOP7 ".byte 0x8d,0xb4,0x26,0x00,0x00,0x00,0x00\n"
    +#define GENERIC_NOP8 GENERIC_NOP1 GENERIC_NOP7
    +
    +/* Opteron 64bit nops */
    +#define K8_NOP1 GENERIC_NOP1
    +#define K8_NOP2 ".byte 0x66,0x90\n"
    +#define K8_NOP3 ".byte 0x66,0x66,0x90\n"
    +#define K8_NOP4 ".byte 0x66,0x66,0x66,0x90\n"
    +#define K8_NOP5 K8_NOP3 K8_NOP2
    +#define K8_NOP6 K8_NOP3 K8_NOP3
    +#define K8_NOP7 K8_NOP4 K8_NOP3
    +#define K8_NOP8 K8_NOP4 K8_NOP4
    +
    +/* K7 nops */
    +/* uses eax dependencies (arbitary choice) */
    +#define K7_NOP1 GENERIC_NOP1
    +#define K7_NOP2 ".byte 0x8b,0xc0\n"
    +#define K7_NOP3 ".byte 0x8d,0x04,0x20\n"
    +#define K7_NOP4 ".byte 0x8d,0x44,0x20,0x00\n"
    +#define K7_NOP5 K7_NOP4 ASM_NOP1
    +#define K7_NOP6 ".byte 0x8d,0x80,0,0,0,0\n"
    +#define K7_NOP7 ".byte 0x8D,0x04,0x05,0,0,0,0\n"
    +#define K7_NOP8 K7_NOP7 ASM_NOP1
    +
    +/* P6 nops */
    +/* uses eax dependencies (Intel-recommended choice) */
    +#define P6_NOP1 GENERIC_NOP1
    +#define P6_NOP2 ".byte 0x66,0x90\n"
    +#define P6_NOP3 ".byte 0x0f,0x1f,0x00\n"
    +#define P6_NOP4 ".byte 0x0f,0x1f,0x40,0\n"
    +#define P6_NOP5 ".byte 0x0f,0x1f,0x44,0x00,0\n"
    +#define P6_NOP6 ".byte 0x66,0x0f,0x1f,0x44,0x00,0\n"
    +#define P6_NOP7 ".byte 0x0f,0x1f,0x80,0,0,0,0\n"
    +#define P6_NOP8 ".byte 0x0f,0x1f,0x84,0x00,0,0,0,0\n"
    +
    +#if defined(CONFIG_MK8)
    +#define ASM_NOP1 K8_NOP1
    +#define ASM_NOP2 K8_NOP2
    +#define ASM_NOP3 K8_NOP3
    +#define ASM_NOP4 K8_NOP4
    +#define ASM_NOP5 K8_NOP5
    +#define ASM_NOP6 K8_NOP6
    +#define ASM_NOP7 K8_NOP7
    +#define ASM_NOP8 K8_NOP8
    +#elif defined(CONFIG_MK7)
    +#define ASM_NOP1 K7_NOP1
    +#define ASM_NOP2 K7_NOP2
    +#define ASM_NOP3 K7_NOP3
    +#define ASM_NOP4 K7_NOP4
    +#define ASM_NOP5 K7_NOP5
    +#define ASM_NOP6 K7_NOP6
    +#define ASM_NOP7 K7_NOP7
    +#define ASM_NOP8 K7_NOP8
    +#elif defined(CONFIG_M686) || defined(CONFIG_MPENTIUMII) || \
    + defined(CONFIG_MPENTIUMIII) || defined(CONFIG_MPENTIUMM) || \
    + defined(CONFIG_MCORE2) || defined(CONFIG_PENTIUM4)
    +#define ASM_NOP1 P6_NOP1
    +#define ASM_NOP2 P6_NOP2
    +#define ASM_NOP3 P6_NOP3
    +#define ASM_NOP4 P6_NOP4
    +#define ASM_NOP5 P6_NOP5
    +#define ASM_NOP6 P6_NOP6
    +#define ASM_NOP7 P6_NOP7
    +#define ASM_NOP8 P6_NOP8
    +#else
    +#define ASM_NOP1 GENERIC_NOP1
    +#define ASM_NOP2 GENERIC_NOP2
    +#define ASM_NOP3 GENERIC_NOP3
    +#define ASM_NOP4 GENERIC_NOP4
    +#define ASM_NOP5 GENERIC_NOP5
    +#define ASM_NOP6 GENERIC_NOP6
    +#define ASM_NOP7 GENERIC_NOP7
    +#define ASM_NOP8 GENERIC_NOP8
    +#endif
    +
    +#define ASM_NOP_MAX 8
    +
    +#endif
    Index: linux/include/asm-x86/processor.h
    ================================================== =================
    --- linux.orig/include/asm-x86/processor.h
    +++ linux/include/asm-x86/processor.h
    @@ -20,6 +20,7 @@ struct mm_struct;
    #include
    #include
    #include
    +#include
    #include
    #include
    #include
    @@ -674,91 +675,6 @@ extern int bootloader_type;
    extern char ignore_fpu_irq;
    #define cache_line_size() (boot_cpu_data.x86_cache_alignment)

    -/* generic versions from gas */
    -#define GENERIC_NOP1 ".byte 0x90\n"
    -#define GENERIC_NOP2 ".byte 0x89,0xf6\n"
    -#define GENERIC_NOP3 ".byte 0x8d,0x76,0x00\n"
    -#define GENERIC_NOP4 ".byte 0x8d,0x74,0x26,0x00\n"
    -#define GENERIC_NOP5 GENERIC_NOP1 GENERIC_NOP4
    -#define GENERIC_NOP6 ".byte 0x8d,0xb6,0x00,0x00,0x00,0x00\n"
    -#define GENERIC_NOP7 ".byte 0x8d,0xb4,0x26,0x00,0x00,0x00,0x00\n"
    -#define GENERIC_NOP8 GENERIC_NOP1 GENERIC_NOP7
    -
    -/* Opteron nops */
    -#define K8_NOP1 GENERIC_NOP1
    -#define K8_NOP2 ".byte 0x66,0x90\n"
    -#define K8_NOP3 ".byte 0x66,0x66,0x90\n"
    -#define K8_NOP4 ".byte 0x66,0x66,0x66,0x90\n"
    -#define K8_NOP5 K8_NOP3 K8_NOP2
    -#define K8_NOP6 K8_NOP3 K8_NOP3
    -#define K8_NOP7 K8_NOP4 K8_NOP3
    -#define K8_NOP8 K8_NOP4 K8_NOP4
    -
    -/* K7 nops */
    -/* uses eax dependencies (arbitary choice) */
    -#define K7_NOP1 GENERIC_NOP1
    -#define K7_NOP2 ".byte 0x8b,0xc0\n"
    -#define K7_NOP3 ".byte 0x8d,0x04,0x20\n"
    -#define K7_NOP4 ".byte 0x8d,0x44,0x20,0x00\n"
    -#define K7_NOP5 K7_NOP4 ASM_NOP1
    -#define K7_NOP6 ".byte 0x8d,0x80,0,0,0,0\n"
    -#define K7_NOP7 ".byte 0x8D,0x04,0x05,0,0,0,0\n"
    -#define K7_NOP8 K7_NOP7 ASM_NOP1
    -
    -/* P6 nops */
    -/* uses eax dependencies (Intel-recommended choice) */
    -#define P6_NOP1 GENERIC_NOP1
    -#define P6_NOP2 ".byte 0x66,0x90\n"
    -#define P6_NOP3 ".byte 0x0f,0x1f,0x00\n"
    -#define P6_NOP4 ".byte 0x0f,0x1f,0x40,0\n"
    -#define P6_NOP5 ".byte 0x0f,0x1f,0x44,0x00,0\n"
    -#define P6_NOP6 ".byte 0x66,0x0f,0x1f,0x44,0x00,0\n"
    -#define P6_NOP7 ".byte 0x0f,0x1f,0x80,0,0,0,0\n"
    -#define P6_NOP8 ".byte 0x0f,0x1f,0x84,0x00,0,0,0,0\n"
    -
    -#ifdef CONFIG_MK7
    -#define ASM_NOP1 K7_NOP1
    -#define ASM_NOP2 K7_NOP2
    -#define ASM_NOP3 K7_NOP3
    -#define ASM_NOP4 K7_NOP4
    -#define ASM_NOP5 K7_NOP5
    -#define ASM_NOP6 K7_NOP6
    -#define ASM_NOP7 K7_NOP7
    -#define ASM_NOP8 K7_NOP8
    -#elif defined(CONFIG_M686) || defined(CONFIG_MPENTIUMII) || \
    - defined(CONFIG_MPENTIUMIII) || defined(CONFIG_MPENTIUMM) || \
    - defined(CONFIG_MCORE2) || defined(CONFIG_PENTIUM4) || \
    - defined(CONFIG_MPSC)
    -#define ASM_NOP1 P6_NOP1
    -#define ASM_NOP2 P6_NOP2
    -#define ASM_NOP3 P6_NOP3
    -#define ASM_NOP4 P6_NOP4
    -#define ASM_NOP5 P6_NOP5
    -#define ASM_NOP6 P6_NOP6
    -#define ASM_NOP7 P6_NOP7
    -#define ASM_NOP8 P6_NOP8
    -#elif defined(CONFIG_MK8) || defined(CONFIG_X86_64)
    -#define ASM_NOP1 K8_NOP1
    -#define ASM_NOP2 K8_NOP2
    -#define ASM_NOP3 K8_NOP3
    -#define ASM_NOP4 K8_NOP4
    -#define ASM_NOP5 K8_NOP5
    -#define ASM_NOP6 K8_NOP6
    -#define ASM_NOP7 K8_NOP7
    -#define ASM_NOP8 K8_NOP8
    -#else
    -#define ASM_NOP1 GENERIC_NOP1
    -#define ASM_NOP2 GENERIC_NOP2
    -#define ASM_NOP3 GENERIC_NOP3
    -#define ASM_NOP4 GENERIC_NOP4
    -#define ASM_NOP5 GENERIC_NOP5
    -#define ASM_NOP6 GENERIC_NOP6
    -#define ASM_NOP7 GENERIC_NOP7
    -#define ASM_NOP8 GENERIC_NOP8
    -#endif
    -
    -#define ASM_NOP_MAX 8
    -
    #define HAVE_ARCH_PICK_MMAP_LAYOUT 1
    #define ARCH_HAS_PREFETCHW
    #define ARCH_HAS_SPINLOCK_PREFETCH
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  6. [PATCH] [20/20] x86: Print which shared library/executable faulted in segfault etc. messages


    They now look like

    hal-resmgr[13791]: segfault at 3c rip 2b9c8caec182 rsp 7fff1e825d30 error 4 in libacl.so.1.1.0[2b9c8caea000+6000]

    This makes it easier to pinpoint bugs to specific libraries.

    And printing the offset into a mapping also always allows to find the
    correct fault point in a library even with randomized mappings. Previously
    there was no way to actually find the correct code address inside
    the randomized mapping.

    Relies on earlier patch to shorten the printk formats.

    They are often now longer than 80 characters, but I think that's worth
    it.

    Patch for i386 and x86-64.

    Signed-off-by: Andi Kleen

    ---
    arch/x86/kernel/signal_32.c | 7 +++++--
    arch/x86/kernel/signal_64.c | 7 +++++--
    arch/x86/kernel/traps_32.c | 7 +++++--
    arch/x86/mm/fault_32.c | 4 +++-
    include/linux/mm.h | 1 +
    mm/memory.c | 27 +++++++++++++++++++++++++++
    6 files changed, 46 insertions(+), 7 deletions(-)

    Index: linux/include/linux/mm.h
    ================================================== =================
    --- linux.orig/include/linux/mm.h
    +++ linux/include/linux/mm.h
    @@ -1145,6 +1145,7 @@ extern int randomize_va_space;
    #endif

    const char * arch_vma_name(struct vm_area_struct *vma);
    +void print_vma_addr(char *prefix, unsigned long rip);

    struct page *sparse_mem_map_populate(unsigned long pnum, int nid);
    pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
    Index: linux/mm/memory.c
    ================================================== =================
    --- linux.orig/mm/memory.c
    +++ linux/mm/memory.c
    @@ -2746,3 +2746,30 @@ int access_process_vm(struct task_struct

    return buf - old_buf;
    }
    +
    +/*
    + * Print the name of a VMA.
    + */
    +void print_vma_addr(char *prefix, unsigned long ip)
    +{
    + struct mm_struct *mm = current->mm;
    + struct vm_area_struct *vma;
    + down_read(&mm->mmap_sem);
    + vma = find_vma(mm, ip);
    + if (vma && vma->vm_file) {
    + struct file *f = vma->vm_file;
    + char *buf = (char *)__get_free_page(GFP_KERNEL);
    + if (buf) {
    + char *p, *s;
    + p = d_path(f->f_dentry, f->f_vfsmnt, buf, PAGE_SIZE);
    + s = strrchr(p, '/');
    + if (s)
    + p = s+1;
    + printk("%s%s[%lx+%lx]", prefix, p,
    + vma->vm_start,
    + vma->vm_end - vma->vm_start);
    + free_page((unsigned long)buf);
    + }
    + }
    + up_read(&current->mm->mmap_sem);
    +}
    Index: linux/arch/x86/kernel/signal_32.c
    ================================================== =================
    --- linux.orig/arch/x86/kernel/signal_32.c
    +++ linux/arch/x86/kernel/signal_32.c
    @@ -198,12 +198,15 @@ asmlinkage int sys_sigreturn(unsigned lo
    return ax;

    badframe:
    - if (show_unhandled_signals && printk_ratelimit())
    + if (show_unhandled_signals && printk_ratelimit()) {
    printk("%s%s[%d] bad frame in sigreturn frame:%p ip:%lx"
    - " sp:%lx oeax:%lx\n",
    + " sp:%lx oeax:%lx",
    task_pid_nr(current) > 1 ? KERN_INFO : KERN_EMERG,
    current->comm, task_pid_nr(current), frame, regs->ip,
    regs->sp, regs->orig_ax);
    + print_vma_addr(" in ", regs->ip);
    + printk("\n");
    + }

    force_sig(SIGSEGV, current);
    return 0;
    Index: linux/arch/x86/kernel/signal_64.c
    ================================================== =================
    --- linux.orig/arch/x86/kernel/signal_64.c
    +++ linux/arch/x86/kernel/signal_64.c
    @@ -481,9 +481,12 @@ do_notify_resume(struct pt_regs *regs, v
    void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
    {
    struct task_struct *me = current;
    - if (show_unhandled_signals && printk_ratelimit())
    - printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx\n",
    + if (show_unhandled_signals && printk_ratelimit()) {
    + printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx",
    me->comm,me->pid,where,frame,regs->ip,regs->sp,regs->orig_ax);
    + print_vma_addr(" in ", regs->ip);
    + printk("\n");
    + }

    force_sig(SIGSEGV, me);
    }
    Index: linux/arch/x86/kernel/traps_32.c
    ================================================== =================
    --- linux.orig/arch/x86/kernel/traps_32.c
    +++ linux/arch/x86/kernel/traps_32.c
    @@ -673,11 +673,14 @@ void __kprobes do_general_protection(str
    current->thread.error_code = error_code;
    current->thread.trap_no = 13;
    if (show_unhandled_signals && unhandled_signal(current, SIGSEGV) &&
    - printk_ratelimit())
    + printk_ratelimit()) {
    printk(KERN_INFO
    - "%s[%d] general protection ip:%lx sp:%lx error:%lx\n",
    + "%s[%d] general protection ip:%lx sp:%lx error:%lx",
    current->comm, task_pid_nr(current),
    regs->ip, regs->sp, error_code);
    + print_vma_addr(" in ", regs->ip);
    + printk("\n");
    + }

    force_sig(SIGSEGV, current);
    return;
    Index: linux/arch/x86/mm/fault_32.c
    ===================================================================
    --- linux.orig/arch/x86/mm/fault_32.c
    +++ linux/arch/x86/mm/fault_32.c
    @@ -550,10 +550,12 @@ bad_area_nosemaphore:
    if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
    printk_ratelimit()) {
    printk("%s%s[%d]: segfault at %lx ip %08lx "
    - "sp %08lx error %lx\n",
    + "sp %08lx error %lx",
    task_pid_nr(tsk) > 1 ? KERN_INFO : KERN_EMERG,
    tsk->comm, task_pid_nr(tsk), address, regs->ip,
    regs->sp, error_code);
    + print_vma_addr(" in ", regs->ip);
    + printk("\n");
    }
    tsk->thread.cr2 = address;
    /* Kernel addresses are always protection faults */
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  7. Re: [PATCH] [20/20] x86: Print which shared library/executable faulted in segfault etc. messages

    Andi Kleen wrote:
    > They now look like
    >
    > hal-resmgr[13791]: segfault at 3c rip 2b9c8caec182 rsp 7fff1e825d30 error 4 in libacl.so.1.1.0[2b9c8caea000+6000]
    >
    > This makes it easier to pinpoint bugs to specific libraries.
    >
    > And printing the offset into a mapping also always makes it possible
    > to find the correct fault point in a library, even with randomized
    > mappings. Previously there was no way to find the correct code
    > address inside the randomized mapping.
    >
    > Relies on earlier patch to shorten the printk formats.
    >
    > They are often now longer than 80 characters, but I think that's worth
    > it.
    >
    > Patch for i386 and x86-64.
    >
    > Signed-off-by: Andi Kleen
    >
    > ---
    > arch/x86/kernel/signal_32.c | 7 +++++--
    > arch/x86/kernel/signal_64.c | 7 +++++--
    > arch/x86/kernel/traps_32.c | 7 +++++--
    > arch/x86/mm/fault_32.c | 4 +++-
    > include/linux/mm.h | 1 +
    > mm/memory.c | 27 +++++++++++++++++++++++++++
    > 6 files changed, 46 insertions(+), 7 deletions(-)
    >
    > Index: linux/include/linux/mm.h
    > ===================================================================
    > --- linux.orig/include/linux/mm.h
    > +++ linux/include/linux/mm.h
    > @@ -1145,6 +1145,7 @@ extern int randomize_va_space;
    > #endif
    >
    > const char * arch_vma_name(struct vm_area_struct *vma);
    > +void print_vma_addr(char *prefix, unsigned long rip);
    >
    > struct page *sparse_mem_map_populate(unsigned long pnum, int nid);
    > pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
    > Index: linux/mm/memory.c
    > ===================================================================
    > --- linux.orig/mm/memory.c
    > +++ linux/mm/memory.c
    > @@ -2746,3 +2746,30 @@ int access_process_vm(struct task_struct
    >
    > return buf - old_buf;
    > }
    > +
    > +/*
    > + * Print the name of a VMA.
    > + */
    > +void print_vma_addr(char *prefix, unsigned long ip)
    > +{
    > + struct mm_struct *mm = current->mm;
    > + struct vm_area_struct *vma;
    > + down_read(&mm->mmap_sem);
    > + vma = find_vma(mm, ip);
    > + if (vma && vma->vm_file) {
    > + struct file *f = vma->vm_file;
    > + char *buf = (char *)__get_free_page(GFP_KERNEL);
    > + if (buf) {
    > + char *p, *s;
    > + p = d_path(f->f_dentry, f->f_vfsmnt, buf, PAGE_SIZE);


    d_path() can return an error. You should add:

    	if (IS_ERR(p))
    		p = "?";

    > + s = strrchr(p, '/');
    > + if (s)
    > + p = s+1;
    > + printk("%s%s[%lx+%lx]", prefix, p,
    > + vma->vm_start,
    > + vma->vm_end - vma->vm_start);
    > + free_page((unsigned long)buf);
    > + }
    > + }
    > + up_read(&current->mm->mmap_sem);
    > +}


    Thank you


  8. Re: [PATCH] [1/20] x86: Make ptrace.h safe to include from assembler code


    * Andi Kleen wrote:

    > Index: linux/include/asm-x86/ptrace-abi.h
    > ===================================================================
    > --- linux.orig/include/asm-x86/ptrace-abi.h
    > +++ linux/include/asm-x86/ptrace-abi.h
    > @@ -80,6 +80,7 @@
    >
    > #define PTRACE_SINGLEBLOCK 33 /* resume execution until next branch */
    >
    > +#ifndef __ASSEMBLY__


    hm, this patch misses a rationale - what assembly code includes
    ptrace-abi.h directly or indirectly? Did you see any build breakage with
    x86.git that requires this? (if yes then please send me the .config)

    Ingo

  9. Re: [PATCH] [9/20] x86: Don't use oops_begin in 64bit mce code


    * Andi Kleen wrote:

    > It is not really useful to lock machine checks against oopses. And
    > machine checks normally don't nest, so they don't need their own
    > locking. Just call bust_spinlock/console_verbose directly.


    is this in response to any particular incident you've seen?

    Ingo

  10. Re: [PATCH] [5/20] x86: Introduce nsec_barrier()


    * Andi Kleen wrote:

    > nsec_barrier() is a new barrier primitive that stops RDTSC speculation
    > to avoid races with timer interrupts on other CPUs.
    >
    > Add it to all architectures. Except for x86 it is a nop right now. I
    > only tested x86, but it's a very simple change.
    >
    > On x86 it expands either to LFENCE (for Intel CPUs) or MFENCE (for AMD
    > CPUs) which stops RDTSC on all currently known microarchitectures that
    > implement SSE. On CPUs without SSE there is generally no RDTSC
    > speculation.


    i've picked up your rdtsc patches into x86.git but have simplified it:
    there's no nsec_barrier() anymore - rdtsc() is always synchronous.
    MFENCE/LFENCE is fast enough. Open-coding such barriers almost always
    leads to needless trouble. Please check the next x86.git tree.

    Ingo

  11. Re: [PATCH] [11/20] x86: Use the correct cpuid method to detect MWAIT support for C states


    * Andi Kleen wrote:

    > +static int mwait_usable(const struct cpuinfo_x86 *c)
    > +{
    > + if (force_mwait)
    > + return 1;
    > + /* Any C1 states supported? */
    > + return c->cpuid_level >= 5 && ((cpuid_edx(5) >> 4) & 0xf) > 0;
    > +}
    > +
    > void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
    > {
    > - if (cpu_has(c, X86_FEATURE_MWAIT)) {
    > + if (cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)) {
    > printk("monitor/mwait feature present.\n");


    hm, why not clear FEATURE_MWAIT if it's "not usable"? That's the
    standard approach we do for CPU features that do not work.

    Ingo

  12. Re: [PATCH] [10/20] i386: Move MWAIT idle check to generic CPU initialization


    * Andi Kleen wrote:

    > Previously it was only run for Intel CPUs, but AMD Fam10h implements
    > MWAIT too.
    >
    > This matches 64bit behaviour.
    >
    > Signed-off-by: Andi Kleen


    thanks, applied.

    Ingo

  13. Re: [PATCH] [12/20] x86: Use a per cpu timer for correctable machine check checking


    * Andi Kleen wrote:

    > Previously the code used a single timer that then used
    > smp_call_function to interrupt all CPUs while the original CPU was
    > waiting for them.
    >
    > But it is better / more real time and more power friendly to simply
    > run individual timers on each CPU so they all do this independently.
    >
    > This way no single CPU has to wait for all others.


    i think we should unify this code first and provide it on 32-bit as
    well.

    Ingo

  14. Re: [PATCH] [19/20] x86: Use shorter addresses in i386 segfault printks


    * Andi Kleen wrote:

    > x86: Use shorter addresses in i386 segfault printks


    hm, why? It's pretty well-established that we print addresses 8 char
    wide on 32-bit.

    Ingo

  15. Re: [PATCH] [15/20] x86: Move X86_FEATURE_CONSTANT_TSC into early cpu feature detection


    * Andi Kleen wrote:

    > Need this in the next patch in time_init and that happens early.
    >
    > This includes a minor fix on i386 where early_intel_workarounds()
    > [which is now called early_init_intel] really executes early as the
    > comments say.


    thanks, applied. I'll wait for Thomas to comment on the TSC bits though.
    (but as long as we carry this patch in x86.git it should make your
    future patching efforts easier.)

    Ingo

  16. Re: [PATCH] [20/20] x86: Print which shared library/executable faulted in segfault etc. messages


    * Andi Kleen wrote:

    > They now look like
    >
    > hal-resmgr[13791]: segfault at 3c rip 2b9c8caec182 rsp 7fff1e825d30
    > error 4 in libacl.so.1.1.0[2b9c8caea000+6000]
    >
    > This makes it easier to pinpoint bugs to specific libraries.


    yep, that's really useful.

    I think the patch needs one more iteration though:

    > And printing the offset into a mapping also always makes it possible
    > to find the correct fault point in a library, even with randomized
    > mappings. Previously there was no way to find the correct code
    > address inside the randomized mapping.
    >
    > Relies on earlier patch to shorten the printk formats.
    >
    > They are often now longer than 80 characters, but I think that's worth
    > it.


    why not make it multi-line? that way the %lx hack wouldnt be needed
    either.

    > +void print_vma_addr(char *prefix, unsigned long ip)
    > +{
    > + struct mm_struct *mm = current->mm;
    > + struct vm_area_struct *vma;
    > + down_read(&mm->mmap_sem);
    > + vma = find_vma(mm, ip);


    grumble. Proper CodingStyle please.

    > + if (buf) {
    > + char *p, *s;
    > + p = d_path(f->f_dentry, f->f_vfsmnt, buf, PAGE_SIZE);


    this one too.

    > + if (show_unhandled_signals && printk_ratelimit()) {
    > + printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx",
    > me->comm,me->pid,where,frame,regs->ip,regs->sp,regs->orig_ax);


    and this.

    Ingo

  17. Re: [PATCH] [9/20] x86: Don't use oops_begin in 64bit mce code

    On Thursday 03 January 2008 11:39:12 Ingo Molnar wrote:
    >
    > * Andi Kleen wrote:
    >
    > > It is not really useful to lock machine checks against oopses. And
    > > machine checks normally don't nest, so they don't need their own
    > > locking. Just call bust_spinlock/console_verbose directly.

    >
    > is this in response to any particular incident you've seen?



    No, that was a preparatory patch for the "use 64bit machine check code
    for 32bit kernels" because 32bit doesn't have oops_begin(), but it is
    useful on its own.

    -Andi

  18. Re: [PATCH] [5/20] x86: Introduce nsec_barrier()

    On Thursday 03 January 2008 11:47:54 Ingo Molnar wrote:
    >
    > * Andi Kleen wrote:
    >
    > > nsec_barrier() is a new barrier primitive that stops RDTSC speculation
    > > to avoid races with timer interrupts on other CPUs.
    > >
    > > Add it to all architectures. Except for x86 it is a nop right now. I
    > > only tested x86, but it's a very simple change.
    > >
    > > On x86 it expands either to LFENCE (for Intel CPUs) or MFENCE (for AMD
    > > CPUs) which stops RDTSC on all currently known microarchitectures that
    > > implement SSE. On CPUs without SSE there is generally no RDTSC
    > > speculation.

    >
    > i've picked up your rdtsc patches into x86.git but have simplified it:
    > there's no nsec_barrier() anymore - rdtsc() is always synchronous.
    > MFENCE/LFENCE is fast enough. Open-coding such barriers almost always
    > leads to needless trouble. Please check the next x86.git tree.


    That's most likely wrong unless you added two barriers: they strictly
    need to be both before and after RDTSC.

    I still think having the explicit barrier is the better approach here.

    It's also useful for performance measurements, because it allows
    a cheap way to measure a specific region with RDTSC.

    -Andi

  19. Re: [PATCH] [1/20] x86: Make ptrace.h safe to include from assembler code

    On Thursday 03 January 2008 10:54:52 Ingo Molnar wrote:
    >
    > * Andi Kleen wrote:
    >
    > > Index: linux/include/asm-x86/ptrace-abi.h
    > > ===================================================================
    > > --- linux.orig/include/asm-x86/ptrace-abi.h
    > > +++ linux/include/asm-x86/ptrace-abi.h
    > > @@ -80,6 +80,7 @@
    > >
    > > #define PTRACE_SINGLEBLOCK 33 /* resume execution until next branch */
    > >
    > > +#ifndef __ASSEMBLY__

    >
    > hm, this patch misses a rationale - what assembly code includes
    > ptrace-abi.h directly or indirectly? Did you see any build breakage with
    > x86.git that requires this? (if yes then please send me the .config)


    It's needed for the dwarf2 unwinder, but imho useful on its own.

    -Andi



  20. Re: [PATCH] [11/20] x86: Use the correct cpuid method to detect MWAIT support for C states

    On Thursday 03 January 2008 11:45:26 Ingo Molnar wrote:
    >
    > * Andi Kleen wrote:
    >
    > > +static int mwait_usable(const struct cpuinfo_x86 *c)
    > > +{
    > > + if (force_mwait)
    > > + return 1;
    > > + /* Any C1 states supported? */
    > > + return c->cpuid_level >= 5 && ((cpuid_edx(5) >> 4) & 0xf) > 0;
    > > +}
    > > +
    > > void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
    > > {
    > > - if (cpu_has(c, X86_FEATURE_MWAIT)) {
    > > + if (cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)) {
    > > printk("monitor/mwait feature present.\n");

    >
    > hm, why not clear FEATURE_MWAIT if it's "not usable"? That's the
    > standard approach we do for CPU features that do not work.


    Well, it works, just in an unexpected way that is not useful to the
    kernel.

    At least on AMD there is a bit to enable it for ring 3 too, so
    in theory someone could use it anyway.

    -Andi


+ Reply to Thread
Page 1 of 2 1 2 LastLast