Thread: [PATCH 0/3] ftrace: code consolidation

  1. [PATCH 0/3] ftrace: code consolidation

    [ For 2.6.29 ]

    The first two patches make ftrace function calling and other
    sensitive tracing use a macro to handle preemption disabling.
    This removes a lot of duplicate code that prevents infinite
    recursion through the scheduler.

    The last patch adds an easy way for the user to choose either
    irq disabling or preemption disabling for the function tracer.

    Peter Zijlstra noticed that part of a trace was not showing
    up, due to traces lost to interrupts while the function
    tracer was running.

    Due to the sensitive nature of the function tracer, it cannot
    allow recursive tracing, so it disables recursion while it
    records a trace. But this also means that, without disabling
    interrupts, anything that happens in an interrupt while a
    trace is being recorded will be missing from the trace itself.
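
    [ For reference, a condensed sketch of the shape of that recursion
      protection, based on the preempt-only callback shown in patch 3/3.
      Details not visible in the posted diff, such as the
      local_save_flags() call, are assumptions of this sketch: ]

        static void
        function_trace_call_preempt_only(unsigned long ip, unsigned long parent_ip)
        {
                struct trace_array_cpu *data;
                unsigned long flags;
                long disabled;
                int cpu, resched;

                resched = ftrace_preempt_disable();     /* helper from patch 1/3 */
                local_save_flags(flags);
                cpu = raw_smp_processor_id();
                data = global_trace.data[cpu];
                disabled = atomic_inc_return(&data->disabled);

                /*
                 * Record only at the first level of tracing on this CPU.
                 * If an interrupt arrives here and the functions it calls
                 * re-enter the tracer, disabled is greater than 1 and those
                 * entries are dropped. That is the window patch 3/3 closes
                 * by disabling interrupts instead of just preemption.
                 */
                if (likely(disabled == 1))
                        trace_function(&global_trace, data, ip, parent_ip,
                                       flags, preempt_count());

                atomic_dec(&data->disabled);
                ftrace_preempt_enable(resched);
        }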

    -- Steve


  2. [PATCH 3/3] ftrace: function tracer with irqs disabled

    To help with performance, I set the ftracer to not disable interrupts,
    and only to disable preemption. If an interrupt occurred, it would not
    be traced, because the function tracer protects itself from recursion.
    This may be faster, but the trace output might miss some traces.

    This patch makes the function tracer disable interrupts, but it also
    adds a runtime feature to disable preemption instead. It does this by
    having two different tracer functions. When the function tracer is
    enabled, it will check to see which version is requested (irqs disabled
    or preemption disabled). Then it will use the corresponding function
    as the tracer.

    Irq disabling is the default behaviour, but if the user wants better
    performance, with the chance of missing traces, then they can choose
    the preempt disabled version.

    Running hackbench 3 times with the irqs disabled and 3 times with
    the preempt disabled function tracer yielded:

    tracing type        times    entries recorded
    ------------        ------   ----------------
    irq disabled        43.393   166433066
                        43.282   166172618
                        43.298   166256704

    preempt disabled    38.969   159871710
                        38.943   159972935
                        39.325   161056510


    Average:

    irqs disabled:      43.324   166287462
    preempt disabled:   39.079   160300385

    preempt is 10.8 percent faster than irqs disabled.

    I wrote a patch to count function trace recursion and reran hackbench.

    With irq disabled: 1,150 times the function tracer did not trace due to
    recursion.
    With preempt disabled: 5,117,718 times.

    The thousand times with irq disabled could be due to NMIs, or simply a case
    where it called a function that was not protected by notrace.
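
    [ The counting patch itself is not included in this posting. Below is a
      minimal sketch of the idea, hooked into the recursion check inside the
      tracer callback; the per-cpu counter name is hypothetical and not taken
      from the posted patches: ]

        /* Hypothetical counter of entries dropped due to recursion. */
        static DEFINE_PER_CPU(unsigned long, ftrace_recursion_miss);

        /* ... inside the tracer callback, around the existing check ... */
                disabled = atomic_inc_return(&data->disabled);

                if (likely(disabled == 1))
                        trace_function(tr, data, ip, parent_ip, flags, pc);
                else
                        /* already tracing on this CPU: this entry is lost */
                        __get_cpu_var(ftrace_recursion_miss)++;

                atomic_dec(&data->disabled);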

    But we also see that a large amount of the trace is lost with the
    preempt version.

    Signed-off-by: Steven Rostedt
    ---
    kernel/trace/trace.c | 40 +++++++++++++++++++++++++++++++++++++++-
    kernel/trace/trace.h | 1 +
    2 files changed, 40 insertions(+), 1 deletion(-)

    Index: linux-tip.git/kernel/trace/trace.c
    ===================================================================
    --- linux-tip.git.orig/kernel/trace/trace.c    2008-11-03 19:05:24.000000000 -0500
    +++ linux-tip.git/kernel/trace/trace.c    2008-11-03 22:29:17.000000000 -0500
    @@ -235,6 +235,7 @@ static const char *trace_options[] = {
             "stacktrace",
             "sched-tree",
             "ftrace_printk",
    +        "ftrace_preempt",
             NULL
     };

    @@ -880,7 +881,7 @@ ftrace_special(unsigned long arg1, unsig

     #ifdef CONFIG_FUNCTION_TRACER
     static void
    -function_trace_call(unsigned long ip, unsigned long parent_ip)
    +function_trace_call_preempt_only(unsigned long ip, unsigned long parent_ip)
     {
             struct trace_array *tr = &global_trace;
             struct trace_array_cpu *data;
    @@ -906,6 +907,37 @@ function_trace_call(unsigned long ip, un
             ftrace_preempt_enable(resched);
     }

    +static void
    +function_trace_call(unsigned long ip, unsigned long parent_ip)
    +{
    +        struct trace_array *tr = &global_trace;
    +        struct trace_array_cpu *data;
    +        unsigned long flags;
    +        long disabled;
    +        int cpu;
    +        int pc;
    +
    +        if (unlikely(!ftrace_function_enabled))
    +                return;
    +
    +        /*
    +         * Need to use raw, since this must be called before the
    +         * recursive protection is performed.
    +         */
    +        raw_local_irq_save(flags);
    +        cpu = raw_smp_processor_id();
    +        data = tr->data[cpu];
    +        disabled = atomic_inc_return(&data->disabled);
    +
    +        if (likely(disabled == 1)) {
    +                pc = preempt_count();
    +                trace_function(tr, data, ip, parent_ip, flags, pc);
    +        }
    +
    +        atomic_dec(&data->disabled);
    +        raw_local_irq_restore(flags);
    +}
    +
     static struct ftrace_ops trace_ops __read_mostly =
     {
             .func = function_trace_call,
    @@ -914,6 +946,12 @@ static struct ftrace_ops trace_ops __rea
     void tracing_start_function_trace(void)
     {
             ftrace_function_enabled = 0;
    +
    +        if (trace_flags & TRACE_ITER_PREEMPTONLY)
    +                trace_ops.func = function_trace_call_preempt_only;
    +        else
    +                trace_ops.func = function_trace_call;
    +
             register_ftrace_function(&trace_ops);
             if (tracer_enabled)
                     ftrace_function_enabled = 1;
    Index: linux-tip.git/kernel/trace/trace.h
    ===================================================================
    --- linux-tip.git.orig/kernel/trace/trace.h    2008-11-03 18:49:38.000000000 -0500
    +++ linux-tip.git/kernel/trace/trace.h    2008-11-03 19:15:04.000000000 -0500
    @@ -415,6 +415,7 @@ enum trace_iterator_flags {
             TRACE_ITER_STACKTRACE   = 0x100,
             TRACE_ITER_SCHED_TREE   = 0x200,
             TRACE_ITER_PRINTK       = 0x400,
    +        TRACE_ITER_PREEMPTONLY  = 0x800,
     };

     extern struct tracer nop_trace;


  3. [PATCH 1/3] ftrace: ftrace_preempt_disable

    Parts of the tracer need to be careful about schedule recursion.
    If the NEED_RESCHED flag is set, a preempt_enable will call schedule.
    Inside the schedule function, the NEED_RESCHED flag is cleared.

    The problem arises when a trace happens in the schedule function but before
    NEED_RESCHED is cleared. The race is as follows:

    schedule()
      >> tracer called

        trace_function()
            preempt_disable()
            [ record trace ]
            preempt_enable()  <<- here's the issue.

              [check NEED_RESCHED]
              schedule()
                [ Repeat the above, over and over again ]

    The naive approach is simply to use preempt_enable_no_resched instead.
    The problem with that approach is that, although we solve the schedule
    recursion issue, we now might lose a preemption check when not in the
    schedule function.

    trace_function()
        preempt_disable()
        [ record trace ]
        [ Interrupt comes in and sets NEED_RESCHED ]
        preempt_enable_no_resched()
        [ continue without scheduling ]

    The way ftrace handles this problem is with the following approach:

    int resched;

    resched = need_resched();
    preempt_disable_notrace();
    [ record trace ]
    if (resched)
            preempt_enable_no_resched_notrace();
    else
            preempt_enable_notrace();

    This may seem like the opposite of what we want. If resched is set,
    then we call the "no_resched" version?? The reason we do this is that
    if NEED_RESCHED is set before we disable preemption, there are two
    possible reasons for it:

    1) we are in an atomic code path
    2) we are already on our way to the schedule function, and maybe even
    in the schedule function, but have yet to clear the flag.

    In both of the above cases we do not want to schedule.

    This solution has already been implemented within the ftrace infrastructure.
    But the problem is that it has been implemented several times. This patch
    encapsulates this code into two nice functions.

    resched = ftrace_preempt_disable();
    [ record trace]
    ftrace_preempt_enable(resched);

    This way the tracers do not need to worry about getting it right.

    Signed-off-by: Steven Rostedt
    ---
    kernel/trace/trace.h | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
    1 file changed, 48 insertions(+)

    Index: linux-tip.git/kernel/trace/trace.h
    ===================================================================
    --- linux-tip.git.orig/kernel/trace/trace.h    2008-11-03 18:09:05.000000000 -0500
    +++ linux-tip.git/kernel/trace/trace.h    2008-11-03 18:27:38.000000000 -0500
    @@ -419,4 +419,52 @@ enum trace_iterator_flags {

     extern struct tracer nop_trace;

    +/**
    + * ftrace_preempt_disable - disable preemption scheduler safe
    + *
    + * When tracing can happen inside the scheduler, there exists
    + * cases that the tracing might happen before the need_resched
    + * flag is checked. If this happens and the tracer calls
    + * preempt_enable (after a disable), a schedule might take place
    + * causing an infinite recursion.
    + *
    + * To prevent this, we read the need_recshed flag before
    + * disabling preemption. When we want to enable preemption we
    + * check the flag, if it is set, then we call preempt_enable_no_resched.
    + * Otherwise, we call preempt_enable.
    + *
    + * The rational for doing the above is that if need resched is set
    + * and we have yet to reschedule, we are either in an atomic location
    + * (where we do not need to check for scheduling) or we are inside
    + * the scheduler and do not want to resched.
    + */
    +static inline int ftrace_preempt_disable(void)
    +{
    +        int resched;
    +
    +        resched = need_resched();
    +        preempt_disable_notrace();
    +
    +        return resched;
    +}
    +
    +/**
    + * ftrace_preempt_enable - enable preemption scheduler safe
    + * @resched: the return value from ftrace_preempt_disable
    + *
    + * This is a scheduler safe way to enable preemption and not miss
    + * any preemption checks. The disabled saved the state of preemption.
    + * If resched is set, then we were either inside an atomic or
    + * are inside the scheduler (we would have already scheduled
    + * otherwise). In this case, we do not want to call normal
    + * preempt_enable, but preempt_enable_no_resched instead.
    + */
    +static inline void ftrace_preempt_enable(int resched)
    +{
    +        if (resched)
    +                preempt_enable_no_resched_notrace();
    +        else
    +                preempt_enable_notrace();
    +}
    +
     #endif /* _LINUX_KERNEL_TRACE_H */


  4. Re: [PATCH 3/3] ftrace: function tracer with irqs disabled


    * Steven Rostedt wrote:

    > Running hackbench 3 times with the irqs disabled and 3 times with
    > the preempt disabled function tracer yielded:
    >
    > tracing type times entries recorded
    > ------------ -------- ----------------
    > irq disabled 43.393 166433066
    > 43.282 166172618
    > 43.298 166256704
    >
    > preempt disabled 38.969 159871710
    > 38.943 159972935
    > 39.325 161056510


    your numbers might be correct, but i found that hackbench is not
    reliable boot-to-boot - it can easily produce 10% systematic noise or
    more. (perhaps depending on how the various socket data structures
    happen to be allocated)

    the really conclusive way to test this would be to add a hack that
    either does preempt disable or irqs disable, depending on a runtime
    flag - and then observe how hackbench performance reacts to the value
    of that flag.

    note that preempt-disable will also produce less trace entries,
    especially in very irq-rich workloads. Hence it will be "faster".

    Ingo

  5. Re: [PATCH 3/3] ftrace: function tracer with irqs disabled


    On Tue, 2008-11-04 at 09:07 +0100, Ingo Molnar wrote:
    > * Steven Rostedt wrote:
    >
    > > Running hackbench 3 times with the irqs disabled and 3 times with
    > > the preempt disabled function tracer yielded:
    > >
    > > tracing type times entries recorded
    > > ------------ -------- ----------------
    > > irq disabled 43.393 166433066
    > > 43.282 166172618
    > > 43.298 166256704
    > >
    > > preempt disabled 38.969 159871710
    > > 38.943 159972935
    > > 39.325 161056510

    >
    > your numbers might be correct, but i found that hackbench is not
    > reliable boot-to-boot

    I found that, too. But if I kill most background processes before testing,
    the hackbench results look quite stable.

    > - it can easily produce 10% systematic noise or
    > more. (perhaps depending on how the various socket data structures
    > happen to be allocated)
    >
    > the really conclusive way to test this would be to add a hack that
    > either does preempt disable or irqs disable, depending on a runtime
    > flag - and then observe how hackbench performance reacts to the value
    > of that flag.
    >
    > note that preempt-disable will also produce less trace entries,
    > especially in very irq-rich workloads. Hence it will be "faster".
    >
    > Ingo



  6. Re: [PATCH 3/3] ftrace: function tracer with irqs disabled


    * Ingo Molnar wrote:

    > * Steven Rostedt wrote:
    >
    > > Running hackbench 3 times with the irqs disabled and 3 times with
    > > the preempt disabled function tracer yielded:
    > >
    > > tracing type times entries recorded
    > > ------------ -------- ----------------
    > > irq disabled 43.393 166433066
    > > 43.282 166172618
    > > 43.298 166256704
    > >
    > > preempt disabled 38.969 159871710
    > > 38.943 159972935
    > > 39.325 161056510

    >
    > your numbers might be correct, but i found that hackbench is not
    > reliable boot-to-boot - it can easily produce 10% systematic noise
    > or more. (perhaps depending on how the various socket data
    > structures happen to be allocated)
    >
    > the really conclusive way to test this would be to add a hack that
    > either does preempt disable or irqs disable, depending on a runtime
    > flag - and then observe how hackbench performance reacts to the
    > value of that flag.


    ... which is exactly what your patch implements :-)

    > note that preempt-disable will also produce less trace entries,
    > especially in very irq-rich workloads. Hence it will be "faster".


    this point still holds. Do we have any good guess about the 'captured
    trace events per second' rate in the two cases, are they the same?

    Ingo

  7. Re: [PATCH 1/3] ftrace: ftrace_preempt_disable


    i've applied your 3 patches to tip/tracing/ftrace:

    b2a866f: ftrace: function tracer with irqs disabled
    182e9f5: ftrace: insert in the ftrace_preempt_disable()/enable() functions
    8f0a056: ftrace: introduce ftrace_preempt_disable()/enable()

    thanks Steve!

    Ingo

  8. Re: [PATCH 3/3] ftrace: function tracer with irqs disabled


    On Tue, 4 Nov 2008, Ingo Molnar wrote:

    >
    > * Ingo Molnar wrote:
    >
    > > * Steven Rostedt wrote:
    > >
    > > > Running hackbench 3 times with the irqs disabled and 3 times with
    > > > the preempt disabled function tracer yielded:
    > > >
    > > > tracing type times entries recorded
    > > > ------------ -------- ----------------
    > > > irq disabled 43.393 166433066
    > > > 43.282 166172618
    > > > 43.298 166256704
    > > >
    > > > preempt disabled 38.969 159871710
    > > > 38.943 159972935
    > > > 39.325 161056510

    > >
    > > your numbers might be correct, but i found that hackbench is not
    > > reliable boot-to-boot - it can easily produce 10% systematic noise
    > > or more. (perhaps depending on how the various socket data
    > > structures happen to be allocated)
    > >
    > > the really conclusive way to test this would be to add a hack that
    > > either does preempt disable or irqs disable, depending on a runtime
    > > flag - and then observe how hackbench performance reacts to the
    > > value of that flag.

    >
    > ... which is exactly what your patch implements :-)


    Yep ;-)

    Those numbers were done without any reboots in between. I even tried it
    several times, randomly picking between the irqs-disabled and the
    preempt-disabled version, and every time the preempt-disabled runs were
    around 39 secs and the irqs-disabled runs were around 43.

    >
    > > note that preempt-disable will also produce less trace entries,
    > > especially in very irq-rich workloads. Hence it will be "faster".

    >
    > this point still holds. Do we have any good guess about the 'captured
    > trace events per second' rate in the two cases, are they the same?


    If you look at the end of my change log, I printed stats from a patch I
    added that counted the times that ftrace recursed, but did not record.
    Those numbers were quite big with preempt_disabled.

    >> With irq disabled: 1,150 times the function tracer did not trace due to
    >> recursion.
    >> with preempt disabled: 5,117,718 times.


    When we used the preempt disabled version, we lost 5 million traces, as
    opposed to the irq disabled version, which lost only 1,150 traces.

    Considering that we had 166,256,704 traces total, that 5 million is only
    about 4% of the traces lost. Still quite a lot. But again, this is an
    extreme case, because we are tracing hackbench.

    -- Steve


  9. Re: [PATCH 3/3] ftrace: function tracer with irqs disabled


    * Steven Rostedt wrote:

    > > > > tracing type times entries recorded
    > > > > ------------ -------- ----------------
    > > > > irq disabled 43.393 166433066
    > > > > 43.282 166172618
    > > > > 43.298 166256704
    > > > >
    > > > > preempt disabled 38.969 159871710
    > > > > 38.943 159972935
    > > > > 39.325 161056510


    > When we used the preempt disabled version, we lost 5 million traces,
    > as opposed to the irq disabled version, which lost only 1,150 traces.
    >
    > Considering that we had 166,256,704 traces total, that 5 million is
    > only about 4% of the traces lost. Still quite a lot. But again, this
    > is an extreme case, because we are tracing hackbench.


    there's about 10% difference between the two hackbench results - so
    the lack of 5% of the traces could make up for about half of that
    overhead.

    anyway, that still leaves the other 5% as the _true_ overhead of IRQ
    disable.

    is there some other workload that does not lose this many trace
    entries, making it easier to compare irqs-off against preempt-off?

    Ingo

  10. [PATCH] ftrace: ftrace_preempt_disable comment fix


    Impact: comment fixes

    Updates to ftrace_preempt_disable comments as recommended to me
    by Andrew Morton.

    Signed-off-by: Steven Rostedt
    ---
    kernel/trace/trace.h | 10 +++++-----
    1 file changed, 5 insertions(+), 5 deletions(-)

    Index: linux-trace.git/kernel/trace/trace.h
    ===================================================================
    --- linux-trace.git.orig/kernel/trace/trace.h 2008-11-04 16:32:57.000000000 -0500
    +++ linux-trace.git/kernel/trace/trace.h 2008-11-04 16:36:11.000000000 -0500
    @@ -425,17 +425,17 @@ extern struct tracer nop_trace;
    * ftrace_preempt_disable - disable preemption scheduler safe
    *
    * When tracing can happen inside the scheduler, there exists
    - * cases that the tracing might happen before the need_resched
    + * cases where tracing might happen before the need_resched
    * flag is checked. If this happens and the tracer calls
    * preempt_enable (after a disable), a schedule might take place
    * causing an infinite recursion.
    *
    * To prevent this, we read the need_recshed flag before
    - * disabling preemption. When we want to enable preemption we
    - * check the flag, if it is set, then we call preempt_enable_no_resched.
    - * Otherwise, we call preempt_enable.
    + * disabling preemption and store it. When we want to enable preemption
    + * we check the stored flag, if it is set, then we call
    + * preempt_enable_no_resched. Otherwise, we call preempt_enable.
    *
    - * The rational for doing the above is that if need resched is set
    + * The rationale for doing the above is that if need resched is set
    * and we have yet to reschedule, we are either in an atomic location
    * (where we do not need to check for scheduling) or we are inside
    * the scheduler and do not want to resched.



  11. [PATCH v2] ftrace: ftrace_preempt_disable comment fix


    [
    Changes since v1:

    Fixed "need_recshed" to "need_resched"
    ]

    Impact: comment fixes

    Updates to ftrace_preempt_disable comments as recommended
    by Andrew Morton.

    Signed-off-by: Steven Rostedt
    ---
    kernel/trace/trace.h | 12 ++++++------
    1 file changed, 6 insertions(+), 6 deletions(-)

    Index: linux-trace.git/kernel/trace/trace.h
    ===================================================================
    --- linux-trace.git.orig/kernel/trace/trace.h 2008-11-04 16:48:25.000000000 -0500
    +++ linux-trace.git/kernel/trace/trace.h 2008-11-04 16:50:30.000000000 -0500
    @@ -425,17 +425,17 @@ extern struct tracer nop_trace;
    * ftrace_preempt_disable - disable preemption scheduler safe
    *
    * When tracing can happen inside the scheduler, there exists
    - * cases that the tracing might happen before the need_resched
    + * cases where tracing might happen before the need_resched
    * flag is checked. If this happens and the tracer calls
    * preempt_enable (after a disable), a schedule might take place
    * causing an infinite recursion.
    *
    - * To prevent this, we read the need_recshed flag before
    - * disabling preemption. When we want to enable preemption we
    - * check the flag, if it is set, then we call preempt_enable_no_resched.
    - * Otherwise, we call preempt_enable.
    + * To prevent this, we read the need_resched flag before
    + * disabling preemption and store it. When we want to enable preemption
    + * we check the stored flag, if it is set, then we call
    + * preempt_enable_no_resched. Otherwise, we call preempt_enable.
    *
    - * The rational for doing the above is that if need resched is set
    + * The rationale for doing the above is that if need resched is set
    * and we have yet to reschedule, we are either in an atomic location
    * (where we do not need to check for scheduling) or we are inside
    * the scheduler and do not want to resched.



  12. [PATCH v3] ftrace: ftrace_preempt_disable comment fix


    [
    Changes since v2:

    Andrew pointed out that "need resched" should be "need_resched".
    He's trying to be more critical than Randy Dunlap. ;-)
    ]

    Impact: comment fixes

    Updates to ftrace_preempt_disable comments as recommended
    by Andrew Morton.

    Signed-off-by: Steven Rostedt
    ---
    kernel/trace/trace.h | 12 ++++++------
    1 file changed, 6 insertions(+), 6 deletions(-)

    Index: linux-trace.git/kernel/trace/trace.h
    ===================================================================
    --- linux-trace.git.orig/kernel/trace/trace.h 2008-11-04 16:53:46.000000000 -0500
    +++ linux-trace.git/kernel/trace/trace.h 2008-11-04 17:02:14.000000000 -0500
    @@ -425,17 +425,17 @@ extern struct tracer nop_trace;
    * ftrace_preempt_disable - disable preemption scheduler safe
    *
    * When tracing can happen inside the scheduler, there exists
    - * cases that the tracing might happen before the need_resched
    + * cases where tracing might happen before the need_resched
    * flag is checked. If this happens and the tracer calls
    * preempt_enable (after a disable), a schedule might take place
    * causing an infinite recursion.
    *
    - * To prevent this, we read the need_recshed flag before
    - * disabling preemption. When we want to enable preemption we
    - * check the flag, if it is set, then we call preempt_enable_no_resched.
    - * Otherwise, we call preempt_enable.
    + * To prevent this, we read the need_resched flag before
    + * disabling preemption and store it. When we want to enable preemption
    + * we check the stored flag, if it is set, then we call
    + * preempt_enable_no_resched. Otherwise, we call preempt_enable.
    *
    - * The rational for doing the above is that if need resched is set
    + * The rationale for doing the above is that if need_resched is set
    * and we have yet to reschedule, we are either in an atomic location
    * (where we do not need to check for scheduling) or we are inside
    * the scheduler and do not want to resched.

