Thread: Is sysfs the right place to get cache and CPU topology info?

  1. Is sysfs the right place to get cache and CPU topology info?

    I'm being asked by library developers about what is the best way for
    them to get information about the CPU cache structure and the CPU
    topology from the kernel. When I said "x86 puts it in sysfs and we'll
    do the same on powerpc" I got a response pointing me at this statement
    in Documentation/sysfs-rules.txt (the first paragraph):

    "The kernel-exported sysfs exports internal kernel implementation details
    and depends on internal kernel structures and layout. It is agreed upon
    by the kernel developers that the Linux kernel does not provide a stable
    internal API. As sysfs is a direct export of kernel internal
    structures, the sysfs interface cannot provide a stable interface either;
    it may always change along with internal kernel changes."

    They read that to mean that sysfs is not a suitable interface for them
    to use to get information about the system. In particular they read
    that to mean that if they do code their library to read sysfs, it will
    change in the future in such a way as to break their code.

    In other words, they see sysfs as being completely useless for them
    because they can't depend on it as a stable interface. Which is
    reasonable given the quoted paragraph, but on the other hand, I don't
    believe we break userspace interfaces as blithely as that paragraph
    suggests.

    So which is it? Can they rely on the CPU cache and topology
    information under /sys/devices/system/cpu/cpu*, and rely on having
    that information there essentially forever? Or are they correct in
    saying sysfs is useless and we need to find some other way to expose
    the cache and topology information?

    Thanks,
    Paul.
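
    For illustration, the per-CPU directories the question refers to can be
    enumerated defensively. This is only a sketch in Python; the helper name
    is illustrative, not an existing API:

```python
import re

CPU_DIR = re.compile(r"^cpu\d+$")

def cpu_entries(names):
    """Filter directory entries down to per-CPU directories (cpu0, cpu1, ...).

    /sys/devices/system/cpu also contains entries such as 'cpufreq',
    'cpuidle', 'possible' and 'online', so match 'cpu<N>' exactly rather
    than anything starting with 'cpu'.
    """
    return sorted((n for n in names if CPU_DIR.match(n)),
                  key=lambda n: int(n[3:]))

# Live use on Linux (directory listing order is not guaranteed):
#   import os
#   cpus = cpu_entries(os.listdir("/sys/devices/system/cpu"))
```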
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. Re: Is sysfs the right place to get cache and CPU topology info?

    On Wed, 2 Jul 2008 16:27:25 +1000 Paul Mackerras wrote:

    > I'm being asked by library developers about what is the best way for
    > them to get information about the CPU cache structure and the CPU
    > topology from the kernel. When I said "x86 puts it in sysfs and we'll
    > do the same on powerpc" I got a response pointing me at this statement
    > in Documentation/sysfs-rules.txt (the first paragraph):
    >
    > "The kernel-exported sysfs exports internal kernel implementation details
    > and depends on internal kernel structures and layout. It is agreed upon
    > by the kernel developers that the Linux kernel does not provide a stable
    > internal API. As sysfs is a direct export of kernel internal
    > structures, the sysfs interface cannot provide a stable interface either;
    > it may always change along with internal kernel changes."


    Those are dopey weasel words and they should be removed.

    If we put stuff in sysfs then people WILL use it and we WILL need to
    support it for ever. Pointing at some document and saying "call my
    lawyer" just won't cut it.

    sysfs is part of the kernel ABI. We should design our interfaces there
    as carefully as we design any others.

    > They read that to mean that sysfs is not a suitable interface for them
    > to use to get information about the system. In particular they read
    > that to mean that if they do code their library to read sysfs, it will
    > change in the future in such a way as to break their code.
    >
    > In other words, they see sysfs as being completely useless for them
    > because they can't depend on it as a stable interface. Which is
    > reasonable given the quoted paragraph, but on the other hand, I don't
    > believe we break userspace interfaces as blithely as that paragraph
    > suggests.


    Well it's up to them - they own the files. If they later change them
    and break their own interfaces (and presumably their own applications),
    well, perhaps they have chosen an inappropriate career?

    > So which is it? Can they rely on the CPU cache and topology
    > information under /sys/devices/system/cpu/cpu*, and rely on having
    > that information there essentially forever? Or are they correct in
    > saying sysfs is useless and we need to find some other way to expose
    > the cache and topology information?


    Use sysfs. Choose a representation which is maintainable in a
    backward-compatible fashion for all time. Maintain it.


  3. Re: Is sysfs the right place to get cache and CPU topology info?

    Andrew Morton writes:

    > Those are dopey weasel words and they should be removed.


    Thanks. That is my opinion too.

    > If we put stuff in sysfs then people WILL use it and we WILL need to
    > support it for ever. Pointing at some document and saying "call my
    > lawyer" just won't cut it.
    >
    > sysfs is part of the kernel ABI. We should design our interfaces there
    > as carefully as we design any others.
    >
    > > They read that to mean that sysfs is not a suitable interface for them
    > > to use to get information about the system. In particular they read
    > > that to mean that if they do code their library to read sysfs, it will
    > > change in the future in such a way as to break their code.
    > >
    > > In other words, they see sysfs as being completely useless for them
    > > because they can't depend on it as a stable interface. Which is
    > > reasonable given the quoted paragraph, but on the other hand, I don't
    > > believe we break userspace interfaces as blithely as that paragraph
    > > suggests.

    >
    > Well it's up to them - they own the files. If they later change them
    > and break their own interfaces (and presumably their own applications),
    > well, perhaps they have chosen an inappropriate career?


    We have too many "they"s, perhaps. I meant that these developers (of
    an HPC library that wants to know about cpu caches and topology) see
    sysfs as being completely useless as a source of information because
    they expect random kernel developers to keep changing it in
    incompatible ways. So "they" (library developers) don't own the files
    - they're not kernel developers at all.

    > > So which is it? Can they rely on the CPU cache and topology
    > > information under /sys/devices/system/cpu/cpu*, and rely on having
    > > that information there essentially forever? Or are they correct in
    > > saying sysfs is useless and we need to find some other way to expose
    > > the cache and topology information?

    >
    > Use sysfs. Choose a representation which is maintainable in a
    > backward-compatible fashion for all time. Maintain it.


    Thanks.

    Paul.

  4. Re: Is sysfs the right place to get cache and CPU topology info?

    On Wed, 2 Jul 2008 19:45:43 +1000 Paul Mackerras wrote:

    > > Well it's up to them - they own the files. If they later change them
    > > and break their own interfaces (and presumably their own applications),
    > > well, perhaps they have chosen an inappropriate career?

    >
    > We have too many "they"s, perhaps. I meant that these developers (of
    > an HPC library that wants to know about cpu caches and topology) see
    > sysfs as being completely useless as a source of information because
    > they expect random kernel developers to keep changing it in
    > incompatible ways. So "they" (library developers) don't own the files
    > - they're not kernel developers at all.


    Oh. I thought "they" (or you) were proposing adding some new
    topology-exporting files to sysfs.

    If they're talking about using the existing ones then sure, those are
    cast in stone as far as I'm concerned.

    But they do need to be a _bit_ defensive. If they see a file which has
    multiple name:value fields (shouldn't happen) then don't fail if new
    tuples turn up later on. Don't expect them to always be in the same
    order. Don't fail if new files later turn up in a sysfs directory. If
    a file has (a stupid) format like /proc/self/stat then be prepared for
    new columns to appear later on, etc.

    But if basic and obvious steps like that are taken in the library, and
    later kernel changes cause that library to break, we get to fix the
    kernel to unbreak their library.
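
    The defensive style described above can be sketched as follows (Python;
    the helper names are illustrative, not an existing API): ignore unknown
    names, don't assume ordering, and skip malformed lines rather than
    failing.

```python
def parse_fields(text):
    """Parse 'name: value' lines into a dict, defensively:
    unknown names are kept but harmless, ordering is irrelevant,
    and malformed lines are skipped instead of raising."""
    fields = {}
    for line in text.splitlines():
        name, sep, value = line.partition(":")
        if sep:  # lines without a ':' are silently ignored
            fields[name.strip()] = value.strip()
    return fields

def field(text, name, default=None):
    """Look up one field, tolerating its absence."""
    return parse_fields(text).get(name, default)
```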

  5. Re: Is sysfs the right place to get cache and CPU topology info?

    Andrew Morton writes:
    >
    > sysfs is part of the kernel ABI. We should design our interfaces there
    > as carefully as we design any others.


    The basic problem is that sysfs exports an internal kernel object model
    and these tend to change. To really make it stable would require
    splitting it into internal and presented interface. I would be all
    for it, but it doesn't seem realistic to me currently. If we cannot
    even get basic interfaces like the syscall capability stable how would
    you expect to stabilize the complete kobjects?

    And the specific problem with the x86 cache sysfs interface is that it's so
    complicated that no human can really read it directly. This means that to
    actually use it you need some kind of frontend (I have a cheesy
    lscache script for this). I expect that eventually we'll have a standard
    tool for this. Right now most people still rely on /proc/cpuinfo
    output (which is actually human-readable!), but it only shows simple
    cache topologies (L2 only), and with L3 and more complicated ones
    becoming more widespread that doesn't cut it anymore. So I expect
    util-linux will eventually grow a standard lscache program for this,
    and people will eventually just use that frontend's output instead of
    reading sysfs directly.

    -Andi
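
    The kind of parsing such a frontend needs can be sketched as follows,
    assuming the x86 attribute formats (a 'size' value such as '32K', and a
    'shared_cpu_map' value of comma-separated 32-bit hex words). The helpers
    are illustrative sketches, not part of any existing tool:

```python
def parse_size(s):
    """Parse a cache 'size' attribute such as '32K' into bytes.
    Assumes the conventional K/M suffix; falls back to plain bytes."""
    s = s.strip()
    if s.endswith(("K", "k")):
        return int(s[:-1]) * 1024
    if s.endswith(("M", "m")):
        return int(s[:-1]) * 1024 * 1024
    return int(s)

def mask_to_cpus(mask):
    """Turn a shared_cpu_map value like '00000000,00000003' (comma-separated
    hex words, most significant first) into a list of CPU numbers."""
    value = int(mask.replace(",", ""), 16)
    cpus, bit = [], 0
    while value:
        if value & 1:
            cpus.append(bit)
        value >>= 1
        bit += 1
    return cpus
```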

  6. Re: Is sysfs the right place to get cache and CPU topology info?

    Andrew Morton writes:

    > Oh. I thought "they" (or you) were proposing adding some new
    > topology-exporting files to sysfs.


    I was proposing adding them on powerpc using the same format and
    location as x86 already uses. That way the HPC library can use the
    one parsing routine on both x86 and powerpc.

    > If they're talking about using the existing ones then sure, those are
    > cast in stone as far as I'm concerned.
    >
    > But they do need to be a _bit_ defensive. If they see a file which has
    > multiple name:value fields (shouldn't happen) then don't fail if new
    > tuples turn up later on. Don't expect them to always be in the same
    > order. Don't fail if new files later turn up in a sysfs directory. If
    > a file has (a stupid) format like /proc/self/stat then be prepared for
    > new columns to appear later on, etc.
    >
    > But if basic and obvious steps like that are taken in the library, and
    > later kernel changes cause that library to break, we get to fix the
    > kernel to unbreak their library.


    I assume they can rely on finding the stuff they need under
    /sys/devices/system/cpu. Or do they need to traverse the whole of
    /sys, and if so, how would they know which directories they should be
    looking in?

    Paul.

  7. Re: Is sysfs the right place to get cache and CPU topology info?

    Andi Kleen wrote:
    > Andrew Morton writes:
    > >
    > > sysfs is part of the kernel ABI. We should design our interfaces there
    > > as carefully as we design any others.

    >
    > The basic problem is that sysfs exports an internal kernel object model
    > and these tend to change. To really make it stable would require
    > splitting it into internal and presented interface.


    True, but... /sys/devices/system/cpu has been there since around 2.6.5
    iirc. A google code search for that path shows plenty of programs
    (including hal) that hard-code it. Exposed object model or not,
    changing that path would break lots of software.


    > I would be all
    > for it, but it doesn't seem realistic to me currently. If we cannot
    > even get basic interfaces like the syscall capability stable how would
    > you expect to stabilize the complete kobjects?
    >
    > And the specific problem with the x86 cache sysfs interface is that it's so
    > complicated that no human can really read it directly. This means that to
    > actually use it you need some kind of frontend (I have a cheesy
    > lscache script for this).


    Human readability is nice, but a more important issue IMO is whether
    the cache interface can be considered stable enough for programs to
    rely on it. I notice there's no entry for it in Documentation/ABI.


  8. Re: Is sysfs the right place to get cache and CPU topology info?

    Nathan Lynch wrote:
    > Andi Kleen wrote:
    >> Andrew Morton writes:
    >>> sysfs is part of the kernel ABI. We should design our interfaces there
    >>> as carefully as we design any others.

    >> The basic problem is that sysfs exports an internal kernel object model
    >> and these tend to change. To really make it stable would require
    >> splitting it into internal and presented interface.

    >
    > True, but... /sys/devices/system/cpu has been there since around 2.6.5
    > iirc. A google code search for that path shows plenty of programs
    > (including hal) that hard-code it. Exposed object model or not,
    > changing that path would break lots of software.


    Yes it would.

    But Greg is making noises of getting rid of sysdevs and it wouldn't
    surprise me if that ended up being user visible since most object
    model changes end up being visible.

    >
    >
    >> I would be all
    >> for it, but it doesn't seem realistic to me currently. If we cannot
    >> even get basic interfaces like the syscall capability stable how would
    >> you expect to stabilize the complete kobjects?
    >>
    >> And the specific problem with the x86 cache sysfs interface is that it's so
    >> complicated that no human can really read it directly. This means that to
    >> actually use it you need some kind of frontend (I have a cheesy
    >> lscache script for this).

    >
    > Human readability is nice,


    When it's not human-readable, it's usually not usable by Joe Normal
    Programmer either.

    > but a more important issue IMO is whether
    > the cache interface can be considered stable enough for programs to
    > rely on it. I notice there's no entry for it in Documentation/ABI.


    For x86 it closely follows the Intel CPUID architectural specification,
    which can be considered stable (at least for Intel; other vendors do not
    necessarily implement it, though for AMD it is at least faked).

    -Andi


  9. Re: Is sysfs the right place to get cache and CPU topology info?

    On Wed, 2 Jul 2008 21:46:47 +1000 Paul Mackerras wrote:

    > > If they're talking about using the existing ones then sure, those are
    > > cast in stone as far as I'm concerned.
    > >
    > > But they do need to be a _bit_ defensive. If they see a file which has
    > > multiple name:value fields (shouldn't happen) then don't fail if new
    > > tuples turn up later on. Don't expect them to always be in the same
    > > order. Don't fail if new files later turn up in a sysfs directory. If
    > > a file has (a stupid) format like /proc/self/stat then be prepared for
    > > new columns to appear later on, etc.
    > >
    > > But if basic and obvious steps like that are taken in the library, and
    > > later kernel changes cause that library to break, we get to fix the
    > > kernel to unbreak their library.

    >
    > I assume they can rely on finding the stuff they need under
    > /sys/devices/system/cpu. Or do they need to traverse the whole of
    > /sys, and if so, how would they know which directories they should be
    > looking in?


    /sys/devices/system/cpu sounds good to me. Everyone's mounting it at
    /sys.

  10. [PATCH] sysfs-rules.txt: reword API stability statement

    The first paragraph of this document implies that user space
    developers shouldn't use sysfs at all, but then it goes on to describe
    rules that developers should follow when accessing sysfs. Not only is
    this somewhat self-contradictory, it has been shown to discourage
    developers from using established sysfs interfaces.

    A note of caution is more appropriate than a blanket "sysfs will never
    be stable" assertion.

    Signed-off-by: Nathan Lynch
    ---

    Andrew Morton wrote:
    >
    > Those are dopey weasel words and they should be removed.


    Documentation/sysfs-rules.txt | 5 ++---
    1 files changed, 2 insertions(+), 3 deletions(-)

    diff --git a/Documentation/sysfs-rules.txt b/Documentation/sysfs-rules.txt
    index 80ef562..6049a2a 100644
    --- a/Documentation/sysfs-rules.txt
    +++ b/Documentation/sysfs-rules.txt
    @@ -3,9 +3,8 @@ Rules on how to access information in the Linux kernel sysfs
     The kernel-exported sysfs exports internal kernel implementation details
     and depends on internal kernel structures and layout. It is agreed upon
     by the kernel developers that the Linux kernel does not provide a stable
    -internal API. As sysfs is a direct export of kernel internal
    -structures, the sysfs interface cannot provide a stable interface either;
    -it may always change along with internal kernel changes.
    +internal API. Therefore, there are aspects of the sysfs interface that
    +may not be stable across kernel releases.

     To minimize the risk of breaking users of sysfs, which are in most cases
     low-level userspace applications, with a new kernel release, the users
    --
    1.5.5


  11. Re: Is sysfs the right place to get cache and CPU topology info?

    On Wed, Jul 02, 2008 at 05:14:02PM +0200, Andi Kleen wrote:
    > Nathan Lynch wrote:
    > > Andi Kleen wrote:
    > >> Andrew Morton writes:
    > >>> sysfs is part of the kernel ABI. We should design our interfaces there
    > >>> as carefully as we design any others.
    > >> The basic problem is that sysfs exports an internal kernel object model
    > >> and these tend to change. To really make it stable would require
    > >> splitting it into internal and presented interface.

    > >
    > > True, but... /sys/devices/system/cpu has been there since around 2.6.5
    > > iirc. A google code search for that path shows plenty of programs
    > > (including hal) that hard-code it. Exposed object model or not,
    > > changing that path would break lots of software.

    >
    > Yes it would.
    >
    > But Greg is making noises of getting rid of sysdevs and it wouldn't
    > surprise me if that ended up being user visible since most object
    > model changes end up being visible.


    I hope to make sysdevs go away in such a manner that the sysfs tree does
    not change at all. That's my goal, but we still have a long ways to go
    before we can even consider attempting to do this, so don't worry about
    putting things in this location if you feel it is the best fit.

    thanks,

    greg k-h

  12. Removing sysdevs? (was: Re: Is sysfs the right place to get cache and CPU topology info?)

    On Wednesday, 2 of July 2008, Greg KH wrote:
    > On Wed, Jul 02, 2008 at 05:14:02PM +0200, Andi Kleen wrote:
    > > Nathan Lynch wrote:
    > > > Andi Kleen wrote:
    > > >> Andrew Morton writes:
    > > >>> sysfs is part of the kernel ABI. We should design our interfaces there
    > > >>> as carefully as we design any others.
    > > >> The basic problem is that sysfs exports an internal kernel object model
    > > >> and these tend to change. To really make it stable would require
    > > >> splitting it into internal and presented interface.
    > > >
    > > > True, but... /sys/devices/system/cpu has been there since around 2.6.5
    > > > iirc. A google code search for that path shows plenty of programs
    > > > (including hal) that hard-code it. Exposed object model or not,
    > > > changing that path would break lots of software.

    > >
    > > Yes it would.
    > >
    > > But Greg is making noises of getting rid of sysdevs and it wouldn't
    > > surprise me if that ended up being user visible since most object
    > > model changes end up being visible.

    >
    > I hope to make sysdevs go away in such a manner that the sysfs tree does
    > not change at all. That's my goal, but we still have a long ways to go
    > before we can even consider attempting to do this, so don't worry about
    > putting things in this location if you feel it is the best fit.


    Speaking of which, I'm very interested in the removal of sysdevs, since they
    don't fit into the new suspend/hibernation framework I'm working on. Can you
    please tell me what the plan is?

    Thanks,
    Rafael

  13. Re: Removing sysdevs? (was: Re: Is sysfs the right place to get cache and CPU topology info?)

    On Wed, Jul 02, 2008 at 11:41:44PM +0200, Rafael J. Wysocki wrote:
    > On Wednesday, 2 of July 2008, Greg KH wrote:
    > > On Wed, Jul 02, 2008 at 05:14:02PM +0200, Andi Kleen wrote:
    > > > Nathan Lynch wrote:
    > > > > Andi Kleen wrote:
    > > > >> Andrew Morton writes:
    > > > >>> sysfs is part of the kernel ABI. We should design our interfaces there
    > > > >>> as carefully as we design any others.
    > > > >> The basic problem is that sysfs exports an internal kernel object model
    > > > >> and these tend to change. To really make it stable would require
    > > > >> splitting it into internal and presented interface.
    > > > >
    > > > > True, but... /sys/devices/system/cpu has been there since around 2.6.5
    > > > > iirc. A google code search for that path shows plenty of programs
    > > > > (including hal) that hard-code it. Exposed object model or not,
    > > > > changing that path would break lots of software.
    > > >
    > > > Yes it would.
    > > >
    > > > But Greg is making noises of getting rid of sysdevs and it wouldn't
    > > > surprise me if that ended up being user visible since most object
    > > > model changes end up being visible.

    > >
    > > I hope to make sysdevs go away in such a manner that the sysfs tree does
    > > not change at all. That's my goal, but we still have a long ways to go
    > > before we can even consider attempting to do this, so don't worry about
    > > putting things in this location if you feel it is the best fit.

    >
    > Speaking of which, I'm very interested in the removal of sysdevs, since they
    > don't fit into the new suspend/hibernation framework I'm working on. Can you
    > please tell me what the plan is?


    The plan is:
    - remaining driver core cleanups to allow for multiple drivers
    to be bound to individual devices
    - add multiple binding support to the core
    - migrate existing sysdevs to struct device, now that multiple
    binding is allowed
    - delete sysdev structure
    - profit!

    It's that first step that is taking a while, the last big changes will
    be going into 2.6.27 to help accomplish this, after that merge happens
    for 2.6.27-rc1 I'll be working on the remaining steps.

    thanks,

    greg k-h

  14. Re: Removing sysdevs?


    > The plan is:
    > - remaining driver core cleanups to allow for multiple drivers
    > to be bound to individual devices
    > - add multiple binding support to the core
    > - migrate existing sysdevs to struct device, now that multiple
    > binding is allowed
    > - delete sysdev structure
    > - profit!


    I hope the sysdev semantics of running with interrupts off etc. will be kept?

    >
    > It's that first step that is taking a while, the last big changes will
    > be going into 2.6.27 to help accomplish this, after that merge happens
    > for 2.6.27-rc1 I'll be working on the remaining steps.


    So what's up with the sysdev attribute passing changes I posted? Are you
    going to put them in, or will the conversion happen so soon that they're
    not worth it?

    I eventually need the passed attribute for the dynamic bank patches.

    -Andi

  15. Re: Removing sysdevs? (was: Re: Is sysfs the right place to get cache and CPU topology info?)

    On Wednesday, 2 of July 2008, Greg KH wrote:
    > On Wed, Jul 02, 2008 at 11:41:44PM +0200, Rafael J. Wysocki wrote:
    > > On Wednesday, 2 of July 2008, Greg KH wrote:
    > > > On Wed, Jul 02, 2008 at 05:14:02PM +0200, Andi Kleen wrote:
    > > > > Nathan Lynch wrote:
    > > > > > Andi Kleen wrote:
    > > > > >> Andrew Morton writes:
    > > > > >>> sysfs is part of the kernel ABI. We should design our interfaces there
    > > > > >>> as carefully as we design any others.
    > > > > >> The basic problem is that sysfs exports an internal kernel object model
    > > > > >> and these tend to change. To really make it stable would require
    > > > > >> splitting it into internal and presented interface.
    > > > > >
    > > > > > True, but... /sys/devices/system/cpu has been there since around 2.6.5
    > > > > > iirc. A google code search for that path shows plenty of programs
    > > > > > (including hal) that hard-code it. Exposed object model or not,
    > > > > > changing that path would break lots of software.
    > > > >
    > > > > Yes it would.
    > > > >
    > > > > But Greg is making noises of getting rid of sysdevs and it wouldn't
    > > > > surprise me if that ended up being user visible since most object
    > > > > model changes end up being visible.
    > > >
    > > > I hope to make sysdevs go away in such a manner that the sysfs tree does
    > > > not change at all. That's my goal, but we still have a long ways to go
    > > > before we can even consider attempting to do this, so don't worry about
    > > > putting things in this location if you feel it is the best fit.

    > >
    > > Speaking of which, I'm very interested in the removal of sysdevs, since they
    > > don't fit into the new suspend/hibernation framework I'm working on. Can you
    > > please tell me what the plan is?

    >
    > The plan is:
    > - remaining driver core cleanups to allow for multiple drivers
    > to be bound to individual devices
    > - add multiple binding support to the core
    > - migrate existing sysdevs to struct device, now that multiple
    > binding is allowed


    Once they've been migrated to struct device, will they reside on a
    specific 'system' bus, or will they be platform devices?

    > - delete sysdev structure
    > - profit!
    >
    > It's that first step that is taking a while, the last big changes will
    > be going into 2.6.27 to help accomplish this, after that merge happens
    > for 2.6.27-rc1 I'll be working on the remaining steps.


    Sounds good, please let me know if you need help.

    Thanks,
    Rafael

  16. Re: Removing sysdevs?

    On Wed, Jul 02, 2008 at 11:57:00PM +0200, Andi Kleen wrote:
    >
    > > The plan is:
    > > - remaining driver core cleanups to allow for multiple drivers
    > > to be bound to individual devices
    > > - add multiple binding support to the core
    > > - migrate existing sysdevs to struct device, now that multiple
    > > binding is allowed
    > > - delete sysdev structure
    > > - profit!

    >
    > I hope the sysdev semantics of running with interrupts off etc. will be kept?


    Yes, all of the current semantics should be kept.

    > > It's that first step that is taking a while, the last big changes will
    > > be going into 2.6.27 to help accomplish this, after that merge happens
    > > for 2.6.27-rc1 I'll be working on the remaining steps.

    >
    > So what's up with the sysdev attribute passing changes I posted? Are you
    > going to put them in, or will the conversion happen so soon that they're
    > not worth it?
    >
    > I eventually need the passed attribute for the dynamic bank patches.


    Sorry, I was sidetracked by other work today.

    The patches look fine, do you want me to take them through my tree, or
    do you want to have them go through somewhere else as you have some
    dependencies on them?

    If somewhere else, that's fine with me, feel free to add:
    Acked-by: Greg Kroah-Hartman
    to them.

    thanks,

    greg k-h

  17. Re: Removing sysdevs? (was: Re: Is sysfs the right place to get cache and CPU topology info?)

    On Thu, Jul 03, 2008 at 12:08:45AM +0200, Rafael J. Wysocki wrote:
    > On Wednesday, 2 of July 2008, Greg KH wrote:
    > > On Wed, Jul 02, 2008 at 11:41:44PM +0200, Rafael J. Wysocki wrote:
    > > > On Wednesday, 2 of July 2008, Greg KH wrote:
    > > > > On Wed, Jul 02, 2008 at 05:14:02PM +0200, Andi Kleen wrote:
    > > > > > Nathan Lynch wrote:
    > > > > > > Andi Kleen wrote:
    > > > > > >> Andrew Morton writes:
    > > > > > >>> sysfs is part of the kernel ABI. We should design our interfaces there
    > > > > > >>> as carefully as we design any others.
    > > > > > >> The basic problem is that sysfs exports an internal kernel object model
    > > > > > >> and these tend to change. To really make it stable would require
    > > > > > >> splitting it into internal and presented interface.
    > > > > > >
    > > > > > > True, but... /sys/devices/system/cpu has been there since around 2.6.5
    > > > > > > iirc. A google code search for that path shows plenty of programs
    > > > > > > (including hal) that hard-code it. Exposed object model or not,
    > > > > > > changing that path would break lots of software.
    > > > > >
    > > > > > Yes it would.
    > > > > >
    > > > > > But Greg is making noises of getting rid of sysdevs and it wouldn't
    > > > > > surprise me if that ended up being user visible since most object
    > > > > > model changes end up being visible.
    > > > >
    > > > > I hope to make sysdevs go away in such a manner that the sysfs tree does
    > > > > not change at all. That's my goal, but we still have a long ways to go
    > > > > before we can even consider attempting to do this, so don't worry about
    > > > > putting things in this location if you feel it is the best fit.
    > > >
    > > > Speaking of which, I'm very interested in the removing of sysdevs, since they
    > > > don't fit into the new suspend/hibernation framework I'm working on. Can you
    > > > please tell me what the plan is?

    > >
    > > The plan is:
    > > - remaining driver core cleanups to allow for multiple drivers
    > > to be bound to individual devices
    > > - add multiple binding support to the core
    > > - migrate existing sysdevs to struct device, now that multiple
    > > binding is allowed

    >
    > Once they've been migrated to struct device, will they reside on specific
    > 'system' bus, or will they be platform devices?


    I haven't really thought it through more than the above yet, so I don't
    know.

    thanks,

    greg k-h
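
    As an aside for anyone following along: the hard-coded layout under
    /sys/devices/system/cpu that Nathan mentions can be read with a few lines
    of userspace code. A minimal sketch in Python: the attribute names
    (level, type and size under cpuN/cache/indexM/) match what the kernel
    exports on x86, but the helper name and the parameterized root are
    invented here, the latter so the snippet can be exercised against any
    directory tree rather than only a live system.

```python
import os

def read_cache_info(root="/sys/devices/system/cpu"):
    """Collect per-CPU cache attributes from a sysfs-style tree.

    Walks cpuN/cache/indexM directories and reads the 'level',
    'type' and 'size' attribute files, returning a nested dict:
    {'cpu0': {'index0': {'level': '1', 'type': 'Data', ...}, ...}, ...}
    """
    info = {}
    for cpu in sorted(os.listdir(root)):
        cache_dir = os.path.join(root, cpu, "cache")
        # Skip entries like 'cpufreq' or 'cpuidle' that have no cache dir.
        if not cpu.startswith("cpu") or not os.path.isdir(cache_dir):
            continue
        info[cpu] = {}
        for index in sorted(os.listdir(cache_dir)):
            idx_dir = os.path.join(cache_dir, index)
            attrs = {}
            for attr in ("level", "type", "size"):
                path = os.path.join(idx_dir, attr)
                if os.path.isfile(path):
                    # sysfs attribute files are newline-terminated text.
                    with open(path) as f:
                        attrs[attr] = f.read().strip()
            info[cpu][index] = attrs
    return info
```

    This is exactly the kind of consumer that would break if the sysdev
    removal ever changed the /sys/devices/system/cpu paths.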

  18. Re: Removing sysdevs?

    On Wed, 2008-07-02 at 23:57 +0200, Andi Kleen wrote:
    > > The plan is:
    > > - remaining driver core cleanups to allow for multiple drivers
    > > to be bound to individual devices
    > > - add multiple binding support to the core
    > > - migrate existing sysdevs to struct device, now that multiple
    > > binding is allowed
    > > - delete sysdev structure
    > > - profit!

    >
    > I hope the sysdev semantics of running with interrupts etc. will be kept?


    What "running with interrupts"? :-) Are you talking specifically about
    suspend/resume? In this case, normal devices also provide irq_off
    variants of suspend/resume.

    In some areas, those semantics are even a problem for sysdevs. For
    example, it's been a pain in the neck for cpufreq on powermac not to be
    able to schedule in its suspend routine. Among others...

    Cheers,
    Ben.


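
    The distinction being drawn here (callbacks that may sleep versus
    callbacks that run with interrupts disabled) is essentially an ordering
    constraint between two suspend phases. A toy model, purely illustrative:
    the phase names are borrowed from the "irq_off variants" mentioned
    above, and everything else is invented for the sketch.

```python
class Device:
    """Toy device with the two suspend phases under discussion.

    suspend()      runs early, interrupts still enabled: may sleep,
                   schedule, talk to firmware, etc.
    suspend_late() runs afterwards with interrupts (notionally) off:
                   must not sleep, may only touch the hardware directly.
    """
    def __init__(self, name, log):
        self.name = name
        self.log = log

    def suspend(self):
        self.log.append((self.name, "suspend", "irqs-on"))

    def suspend_late(self):
        self.log.append((self.name, "suspend_late", "irqs-off"))

def suspend_all(devices):
    # Phase 1: every device gets its sleepable callback first ...
    for dev in devices:
        dev.suspend()
    # Phase 2: ... only then are interrupts disabled and the
    # irq-off callbacks run, in the same order.
    for dev in devices:
        dev.suspend_late()
```

    Ben's cpufreq example is a driver that needs phase 1 (it has to
    schedule) and so cannot live with a sysdev-style, irq-off-only model.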

  19. Re: Removing sysdevs? (was: Re: Is sysfs the right place to get cache and CPU topology info?)

    On Thursday, 3 of July 2008, Greg KH wrote:
    > On Thu, Jul 03, 2008 at 12:08:45AM +0200, Rafael J. Wysocki wrote:
    > > On Wednesday, 2 of July 2008, Greg KH wrote:
    > > > On Wed, Jul 02, 2008 at 11:41:44PM +0200, Rafael J. Wysocki wrote:
    > > > > On Wednesday, 2 of July 2008, Greg KH wrote:
    > > > > > On Wed, Jul 02, 2008 at 05:14:02PM +0200, Andi Kleen wrote:
    > > > > > > Nathan Lynch wrote:
    > > > > > > > Andi Kleen wrote:
    > > > > > > >> Andrew Morton writes:
    > > > > > > >>> sysfs is part of the kernel ABI. We should design our interfaces there
    > > > > > > >>> as carefully as we design any others.
    > > > > > > >> The basic problem is that sysfs exports an internal kernel object model
    > > > > > > >> and these tend to change. To really make it stable would require
    > > > > > > >> splitting it into internal and presented interface.
    > > > > > > >
    > > > > > > > True, but... /sys/devices/system/cpu has been there since around 2.6.5
    > > > > > > > iirc. A google code search for that path shows plenty of programs
    > > > > > > > (including hal) that hard-code it. Exposed object model or not,
    > > > > > > > changing that path would break lots of software.
    > > > > > >
    > > > > > > Yes it would.
    > > > > > >
    > > > > > > But Greg is making noises of getting rid of sysdevs and it wouldn't
    > > > > > > surprise me if that ended up being user visible since most object
    > > > > > > model changes end up being visible.
    > > > > >
    > > > > > I hope to make sysdevs go away in such a manner that the sysfs tree does
    > > > > > not change at all. That's my goal, but we still have a long ways to go
    > > > > > before we can even consider attempting to do this, so don't worry about
    > > > > > putting things in this location if you feel it is the best fit.
    > > > >
    > > > > Speaking of which, I'm very interested in the removing of sysdevs, since they
    > > > > don't fit into the new suspend/hibernation framework I'm working on. Can you
    > > > > please tell me what the plan is?
    > > >
    > > > The plan is:
    > > > - remaining driver core cleanups to allow for multiple drivers
    > > > to be bound to individual devices
    > > > - add multiple binding support to the core
    > > > - migrate existing sysdevs to struct device, now that multiple
    > > > binding is allowed

    > >
    > > Once they've been migrated to struct device, will they reside on specific
    > > 'system' bus, or will they be platform devices?

    >
    > I haven't really thought it through more than the above yet, so I don't
    > know.


    This is quite important, though, because the device objects that sysdevs will
    be replaced with should provide suspend/hibernation callbacks to be run with
    interrupts disabled, while for some of them it may also be convenient to
    provide "normal" suspend/hibernation callbacks to be run with interrupts
    enabled.

    For this reason their bus type will have to be quite similar to the platform bus
    type.

    Thanks,
    Rafael
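
    For completeness: the topology and cache attributes under the path
    quoted above come in two flavours, hex bitmasks (e.g. shared_cpu_map)
    and, on kernels that provide them, human-readable list files
    (e.g. shared_cpu_list, with contents like "0-3,8"). A small parser for
    the list form; the format is the kernel's cpulist convention, while the
    helper name is made up for this sketch.

```python
def parse_cpulist(s):
    """Expand a sysfs cpulist string such as '0-3,8' into a sorted list.

    The format is comma-separated entries, each either a single CPU
    number or an inclusive 'lo-hi' range.
    """
    cpus = set()
    for part in s.strip().split(","):
        if not part:
            continue  # tolerate an empty string / trailing comma
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return sorted(cpus)
```

    Libraries that treat these files as a stable interface are, in effect,
    relying on exactly the ABI guarantee this thread is asking about.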

  20. Re: Removing sysdevs?


    > The patches look fine, do you want me to take them through my tree, or
    > do you want to have them go through somewhere else as you have some
    > dependencies on them?


    Through your tree is fine. Thanks.

    Perhaps drop the x86 dynamic bank patch; it would actually conflict
    with a patch currently in the x86 tree and doesn't really
    belong in your tree.

    >
    > If somewhere else, that's fine with me, feel free to add:
    > Acked-by: Greg Kroah-Hartman
    > to them.


    -Andi
