Re: x86: 4kstacks default - Kernel

Thread: Re: x86: 4kstacks default

  1. Re: x86: 4kstacks default

    On Thu, 24 Apr 2008 09:36:52 +1000
    David Chinner wrote:

    > On Wed, Apr 23, 2008 at 03:27:01PM +1000, Benjamin Herrenschmidt
    > wrote:
    > > On Sat, 2008-04-19 at 16:23 +0200, Ingo Molnar wrote:
    > > > * Andrew Morton wrote:
    > > >
    > > > > > config 4KSTACKS
    > > > > > bool "Use 4Kb for kernel stacks instead of 8Kb"
    > > > > > - depends on DEBUG_KERNEL
    > > > > > depends on X86_32
    > > > > > + default y
    > > > >
    > > > > This patch will cause kernels to crash.
    > > >
    > > > what mainline kernels crash and how will they crash? Fedora and
    > > > other distros have had 4K stacks enabled for years:
    > > >
    > > > $ grep 4K /boot/config-2.6.24-9.fc9
    > > > CONFIG_4KSTACKS=y
    > > >
    > > > and we've conducted tens of thousands of bootup tests with all
    > > > sorts of drivers and kernel options enabled and have yet to see a
    > > > single crash due to 4K stacks. So basically the kernel default
    > > > just follows the common distro default now. (distros and users
    > > > can still disable it)

    > >
    > > Do we routinely test nasty scenarii such as a GFP_KERNEL allocation
    > > deep in a call stack trying to swap something out to NFS ?

    >
    > I doubt it, because this is the place that a local XFS filesystem
    > typically blows a 4k stack (direct memory reclaim triggering
    > ->writepage). Boot testing does nothing to exercise the potential
    > paths for stack overflows....
    >


    The good news is that direct reclaim is... rare.
    And I also doubt XFS is unique here; imagine the whole stacking thing on x86-64 just the same ...

    I wonder if the direct reclaim path should avoid direct reclaim if the stack has only X bytes left.
    (where the value of X is... well we can figure that one out later)

    The rarity of direct reclaim during normal use ought to make this not a performance problem per se,
    and the benefits go further than just "XFS" or "4K stacks".

    --
    If you want to reach me at my work email, use arjan@linux.intel.com
    For development, discussion and tips for power savings,
    visit http://www.lesswatts.org
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/

  2. Re: x86: 4kstacks default


    On Thu, 2008-04-24 at 09:36 +1000, David Chinner wrote:
    > > Do we routinely test nasty scenarii such as a GFP_KERNEL allocation

    > deep
    > > in a call stack trying to swap something out to NFS ?

    >
    > I doubt it, because this is the place that a local XFS filesystem
    > typically blows a 4k stack (direct memory reclaim triggering
    > ->writepage). Boot testing does nothing to exercise the potential
    > paths for stack overflows....


    Yup, not even counting when said NFS is on top of some fancy
    network stack with a driver on top of USB .... I mean, we do have
    potential for a worst-case scenario that I think -will- blow a 4k stack.

    Ben.



  3. Re: x86: 4kstacks default

    On Wed, Apr 23, 2008 at 05:45:16PM -0700, Arjan van de Ven wrote:
    > THe good news is that direct reclaim is.. rare.
    > And I also doubt XFS is unique here; imagine the whole stacking thing on x86-64 just the same ...


    It's bad news, actually. Because it means the stack overflow happens
    totally at random and is hard to reproduce. And no, XFS is not unique there;
    any filesystem with a complex enough writeback path (aka extents +
    delalloc + smart allocator) will have to use quite a lot here. I'll bet
    my 2 cents that ext4, once finished up, will run into this just as likely.

    > I wonder if the direct reclaim path should avoid direct reclaim if the stack has only X bytes left.
    > (where the value of X is... well we can figure that one out later)


    Actually direct reclaim should be totally avoided for complex
    filesystems. It's horrible for the stack and for the filesystem
    writeout policy and ondisk allocation strategies.


  4. Re: x86: 4kstacks default

    On Thu, 2008-04-24 at 05:52 -0400, Christoph Hellwig wrote:

    > > I wonder if the direct reclaim path should avoid direct reclaim if the stack has only X bytes left.
    > > (where the value of X is... well we can figure that one out later)

    >
    > Actually direct reclaim should be totally avoided for complex
    > filesystems. It's horrible for the stack and for the filesystem
    > writeout policy and ondisk allocation strategies.


    That's basically any reclaim, even kswapd will ruin policy and block
    allocation smarts.


  5. Re: x86: 4kstacks default

    Helge Hafting wrote:
    > Adrian Bunk wrote:
    >> What actually brings bad reputation is shipping a 4k option that is
    >> known to break under some circumstances.
    >>

    > How about making 4k stacks incompatible with those circumstances then?
    > I.e. if you select 4k stacks, then you can't select XFS, because we know
    > that _may_ fail. Similar for ndiswrapper networking, and other
    > stuff where problems have been noticed.


    Problem is, it's the storage configuration (at administration time, not
    kernel build time) that matters, too.

    I have XFS on Fedora with 4k stacks on SATA /dev/sdb1 on my x86 mythbox,
    and it's perfectly fine. But that's a nice, simple setup. If I stacked
    more things over/under it, I'd be more likely to have trouble.

    -Eric

  6. Re: x86: 4kstacks default

    On Thursday 24 April 2008, Christoph Hellwig wrote:
    > On Wed, Apr 23, 2008 at 05:45:16PM -0700, Arjan van de Ven wrote:
    > > The good news is that direct reclaim is... rare.
    > > And I also doubt XFS is unique here; imagine the whole stacking thing on
    > > x86-64 just the same ...

    >
    > It's bad news, actually. Because it means the stack overflow happens
    > totally at random and is hard to reproduce. And no, XFS is not unique there;
    > any filesystem with a complex enough writeback path (aka extents +
    > delalloc + smart allocator) will have to use quite a lot here. I'll bet
    > my 2 cents that ext4, once finished up, will run into this just as likely.
    >
    > > I wonder if the direct reclaim path should avoid direct reclaim if the
    > > stack has only X bytes left. (where the value of X is... well we can
    > > figure that one out later)

    >
    > Actually direct reclaim should be totally avoided for complex
    > filesystems. It's horrible for the stack and for the filesystem
    > writeout policy and ondisk allocation strategies.


    Just as a data point, XFS isn't alone. I run through once or twice a month
    and try to get rid of any new btrfs stack pigs, but keeping under the 4k
    stack barrier is a constant challenge.

    My storage configuration is fairly simple; if we spin the wheel of stacked IO
    devices... it won't be pretty.

    Does it make more sense to kill off some brain cells on finding ways to
    dynamically increase the stack as we run out? Or even give the robust stack
    users like xfs/btrfs a way to say: I'm pretty sure this call path is going to
    hurt, please make my stack bigger now.

    We have relatively few entry points between the rest of the kernel and the FS,
    there should be some ways to compromise here.

    -chris

  7. Re: x86: 4kstacks default

    On Thu, 24 Apr 2008 11:41:30 -0400, "Chris Mason"
    said:
    > On Thursday 24 April 2008, Christoph Hellwig wrote:
    > > On Wed, Apr 23, 2008 at 05:45:16PM -0700, Arjan van de Ven wrote:
    > > > The good news is that direct reclaim is... rare. And I also doubt
    > > > XFS is unique here; imagine the whole stacking thing on x86-64
    > > > just the same ...

    > >
    > > It's bad news, actually. Because it means the stack overflow happens
    > > totally at random and is hard to reproduce. And no, XFS is not unique
    > > there; any filesystem with a complex enough writeback path (aka
    > > extents + delalloc + smart allocator) will have to use quite a lot
    > > here. I'll bet my 2 cents that ext4, once finished up, will run into
    > > this just as likely.
    > >
    > > > I wonder if the direct reclaim path should avoid direct reclaim if
    > > > the stack has only X bytes left. (where the value of X is... well
    > > > we can figure that one out later)

    > >
    > > Actually direct reclaim should be totally avoided for complex
    > > filesystems. It's horrible for the stack and for the filesystem
    > > writeout policy and ondisk allocation strategies.

    >
    > Just as a data point, XFS isn't alone. I run through once or twice a
    > month and try to get rid of any new btrfs stack pigs, but keeping
    > under the 4k stack barrier is a constant challenge.
    >
    > My storage configuration is fairly simple, if we spin the wheel of
    > stacked IO devices...it won't be pretty.
    >
    > Does it make more sense to kill off some brain cells on finding ways
    > to dynamically increase the stack as we run out? Or even give the
    > robust stack users like xfs/btrfs a way to say: I'm pretty sure this
    > call path is going to hurt, please make my stack bigger now.


    Hi,

    (Rookie warning goes here.) To me, growing the stack at more or less
    random places in the kernel seems to be quite a complicated thing to do,
    and it will be quite a maintenance burden to find the right spots to
    insert stack usage checks. So I'd say: lose the dynamic aspect.

    How about unconditionally switching stacks at some defined points within
    the core code of the kernel, just before calling into any driver code,
    for example? The 4k-option has separate irq stacks already, why not have
    driver stacks too?

    I think the most important consideration to keep the stack size small
    was that non-order-0 allocations are unreliable under/after memory
    pressure due to fragmentation and that this allocation has to be done
    for each thread. It is therefore preferable not to do any higher-order
    allocations at all, unless there is a fall-back mechanism if the
    allocation fails. For higher-order stacks there isn't such a fallback...
    Can the system get by (without deadlocks at least in practice) with a
    limited number of preallocated but 'large' stacks (in addition to a
    small per-thread stack)?

    It was discussed that stack space is needed for any sleeping process.
    Could it be arranged that this waiting happens on the smallish stack, at
    least for the most common cases, while non-waiting activity can use the
    big stacks?

    Greetings,
    Alexander

    > We have relatively few entry points between the rest of the kernel and
    > the FS, there should be some ways to compromise here.
    >
    > -chris

    --
    Alexander van Heukelum
    heukelum@fastmail.fm



  8. Re: x86: 4kstacks default

    Andrew Morton linux-foundation.org> writes:

    >
    > > On Sat, 19 Apr 2008 16:23:29 +0200 Ingo Molnar elte.hu> wrote:
    > >
    > > * Andrew Morton linux-foundation.org> wrote:
    > >
    > > > > config 4KSTACKS
    > > > > bool "Use 4Kb for kernel stacks instead of 8Kb"
    > > > > - depends on DEBUG_KERNEL
    > > > > depends on X86_32
    > > > > + default y
    > > >
    > > > This patch will cause kernels to crash.

    > >
    > > what mainline kernels crash and how will they crash?

    >
    > There has been a dribble of reports - I don't have the links handy, nor did
    > I search for them.
    >
    > > Fedora and other
    > > distros have had 4K stacks enabled for years:
    > >
    > > $ grep 4K /boot/config-2.6.24-9.fc9
    > > CONFIG_4KSTACKS=y


    Here is a report - Fedora 8 default kernel, Mac Mini file server, Not Tainted.
    Attempting to copy 100GB+ of data from an hfsplus file system on a USB drive to a
    firewire drive with an XFS filesystem, I got a nasty panic with a huge stack
    backtrace. I gave up and switched to Ubuntu. With a stock kernel.org kernel I
    was able to successfully copy the data over. I still have the machine and the
    restored drives and can try to reproduce it with Fedora 9 w/4K stacks if
    anyone thinks it is worthwhile (i.e. fixable).

    Parag


  9. Re: x86: 4kstacks default

    On Tue, 22 April 2008 11:28:19 +1000, David Chinner wrote:
    > On Mon, Apr 21, 2008 at 09:51:02PM +0200, Denys Vlasenko wrote:
    >
    > > Why xfs code is said to be 5 times bigger than e.g. reiserfs?
    > > Does it have to be that big?

    >
    > If we cut the bulkstat code out, the handle interface, the
    > preallocation, the journalled quota, the delayed allocation, all the
    > runtime validation, the shutdown code, the debug code, the tracing
    > code, etc, then we might get down to the same size reiser....


    Just noticed this bit of FUD. Last time I did some static analysis on
    stack usage, reiserfs alone would blow away 3k, while xfs was somewhere
    below. Reiserfs was improved afaik, but I'd still expect it to be worse
    than xfs until shown otherwise.

    Maybe reiserfs simply isn't used that much in nfs+*fs+md+whatnot+scsi
    setups?

    Jörn

    --
    Courage is not the absence of fear, but rather the judgement that
    something else is more important than fear.
    -- Ambrose Redmoon

  10. Re: x86: 4kstacks default

    Denys Vlasenko wrote:
    > On Sunday 27 April 2008 21:27, Jörn Engel wrote:
    >> On Tue, 22 April 2008 11:28:19 +1000, David Chinner wrote:
    >>> On Mon, Apr 21, 2008 at 09:51:02PM +0200, Denys Vlasenko wrote:
    >>>
    >>>> Why xfs code is said to be 5 times bigger than e.g. reiserfs?
    >>>> Does it have to be that big?
    >>> If we cut the bulkstat code out, the handle interface, the
    >>> preallocation, the journalled quota, the delayed allocation, all the
    >>> runtime validation, the shutdown code, the debug code, the tracing
    >>> code, etc, then we might get down to the same size reiser....

    >> Just noticed this bit of FUD. Last time I did some static analysis on
    >> stack usage, reiserfs alone would blow away 3k, while xfs was somewhere
    >> below.

    >
    > I'm sorry, but it's not what I said.
    > I didn't say reiserfs eats less stack. I don't know.
    > I said it is smaller.
    >
    > reiserfs/* 821474 bytes
    > xfs/* 3019689 bytes


    FWIW, the reason for that is in large part all the features Dave listed
    above, and probably more.

    And, while certainly not yet tiny, the recent trend actually is that xfs
    is getting a bit smaller:

    http://oss.sgi.com/~sandeen/xfs-linedata.png

    (note, though - the Y axis does not start at 0)

    -Eric

  11. Re: x86: 4kstacks default

    On Sunday 27 April 2008 21:27, Jörn Engel wrote:
    > On Tue, 22 April 2008 11:28:19 +1000, David Chinner wrote:
    > > On Mon, Apr 21, 2008 at 09:51:02PM +0200, Denys Vlasenko wrote:
    > >
    > > > Why xfs code is said to be 5 times bigger than e.g. reiserfs?
    > > > Does it have to be that big?

    > >
    > > If we cut the bulkstat code out, the handle interface, the
    > > preallocation, the journalled quota, the delayed allocation, all the
    > > runtime validation, the shutdown code, the debug code, the tracing
    > > code, etc, then we might get down to the same size reiser....

    >
    > Just noticed this bit of FUD. Last time I did some static analysis on
    > stack usage, reiserfs alone would blow away 3k, while xfs was somewhere
    > below.


    I'm sorry, but it's not what I said.
    I didn't say reiserfs eats less stack. I don't know.
    I said it is smaller.

    reiserfs/* 821474 bytes
    xfs/* 3019689 bytes
    --
    vda

  12. Re: x86: 4kstacks default

    On Monday 28 April 2008 01:08, Eric Sandeen wrote:
    > >>>> Why xfs code is said to be 5 times bigger than e.g. reiserfs?
    > >>>> Does it have to be that big?
    > >>> If we cut the bulkstat code out, the handle interface, the
    > >>> preallocation, the journalled quota, the delayed allocation, all the
    > >>> runtime validation, the shutdown code, the debug code, the tracing
    > >>> code, etc, then we might get down to the same size reiser....
    > >> Just noticed this bit of FUD. Last time I did some static analysis on
    > >> stack usage, reiserfs alone would blow away 3k, while xfs was somewhere
    > >> below.

    > >
    > > I'm sorry, but it's not what I said.
    > > I didn't say reiserfs eats less stack. I don't know.
    > > I said it is smaller.
    > >
    > > reiserfs/* 821474 bytes
    > > xfs/* 3019689 bytes

    >
    > FWIW, the reason for that is in large part all the features Dave listed
    > above, and probably more.
    >
    > And, while certainly not yet tiny, the recent trend actually is that xfs
    > is getting a bit smaller:
    >
    > http://oss.sgi.com/~sandeen/xfs-linedata.png


    ~30% line count reduction? Impressive, especially in this age
    of creeping bloat. Thanks.
    --
    vda

  13. Re: x86: 4kstacks default

    Adrian Bunk wrote:
    > On Sun, Apr 20, 2008 at 03:06:23PM +0200, Andi Kleen wrote:
    >> Willy Tarreau wrote:
    >> ...
    >>> I have nothing against changing the default setting to 4k provided that
    >>> it is easy to get back to the safe setting

    >> So you're saying that only advanced users who understand all their
    >> CONFIG options should have the safe settings? And everyone else
    >> the "only explodes once a week" mode?
    >>
    >> For me that is exactly the wrong way around.
    >>
    >> If someone is sure they know what they're doing they can set whatever
    >> crazy settings they want (given there is a quick way to check
    >> for the crazy settings in oops reports so that I can ignore those), but
    >> the default should be always safe and optimized for reliability.

    >
    > That means we'll have nearly zero testing of the "crazy setting" and
    > when someone tries it he'll have a high probability of running into some
    > problems.
    >
    > Such a "crazy setting" shouldn't be offered to users at all.
    >
    > We should either aim at 4k stacks unconditionally for all 32bit
    > architectures with 4k page size or don't allow any architecture
    > to offer 4k stacks.
    >

    I have suggested before that the solution is to allocate memory in
    "stack size" units (obviously must be a multiple of the hardware page
    size). The reason allocation fails is more often fragmentation than
    actual lack of memory, or so it has been reported.

    --
    Bill Davidsen
    "We have more to fear from the bungling of the incompetent than from
    the machinations of the wicked." - from Slashdot

  14. Re: x86: 4kstacks default

    Adrian Bunk wrote:
    > On Sun, Apr 20, 2008 at 02:47:17PM +0200, Willy Tarreau wrote:
    >> ...
    >> I certainly can understand that reducing memory footprint is useful, but
    >> if we want wider testing of 4k stacks, considering they may fail in error
    >> path in complex I/O environment, it's not likely during -rc kernels that
    >> we'll detect problems, and if we push them down the throat of users in a
    >> stable release, of course they will thank us very much for crashing their
    >> NFS servers in production during peak hours.

    >
    > I've seen many bugs in error paths in the kernel and fixed quite a
    > few of them - and stack problems were not a significant part of them.
    >
    > There are so many possible bugs (that also occur in practice) that
    > singling out stack usage won't gain much.
    >
    >> I have nothing against changing the default setting to 4k provided that
    >> it is easy to get back to the safe setting (ie changing a config option,
    >> or better, a cmdline parameter). I just don't agree with the idea of
    >> forcing users to swim in the sh*t, it only brings bad reputation to
    >> Linux.
    >> ...

    >
    > What actually brings bad reputation is shipping a 4k option that is
    > known to break under some circumstances.
    >
    > And history has shown that as long as 8k stacks are available on i386
    > some problems will not get fixed. 4k stacks are available as an option
    > on i386 for more than 4 years, and at about as long we know that there
    > are some setups (AFAIK all that might still be present seem to include
    > XFS) that are known to not work reliably with 4k stacks.
    >
    > If we go after stability and reputation, we have to make a decision
    > whether we want to get 4k stacks on 32bit architectures with 4k page
    > size unconditionally or not at all. That's the way that gets the maximal
    > number of bugs shaken out [1] for all supported configurations before
    > they would hit a stable kernel.
    >

    A good argument for keeping the default 8k and letting people who know
    what they are doing, or think they do, test their system for 4k
    operation. Embedded systems typically have far better defined loads than
    servers or desktops, and are less likely to have different behavior
    change the stack requirements. That doesn't mean they do less, just that
    the load is usually better characterized.

    Vendors shipping a 4k stack kernel are probably not going to be happy if
    someone nfs exports an xfs filesystem on lvm, running on md raid0
    composed of raid5 arrays, containing multipath, iSCSI, SATA and nbd
    devices. No, I didn't make that up, someone asked me what I thought
    their problem was with that setup.

    The kernel is getting more complex, and I don't think that anyone but
    you is interested in making 4k stacks mandatory, or in eliminating them,
    either.

    You frequently take the attitude that something you don't like (like all
    the old but WORKING network drivers) should be removed from the kernel,
    so that people will be forced to use the new whatever and find bugs so
    they can be fixed. Unfortunately in some cases the bugs are never fixed
    and Linux loses a capability it once had.

    The arbitrary 4k limit requires a lot of work on dropping stack usage
    even more than has already been done, and is mostly an effort you want
    other people to make so you can be happy (I assume that if you were
    offering to do it all yourself you already would have). Most
    importantly, it would waste a lot of developer effort on a low-return
    goal, effort which could be used on useful new features or fixing
    corner-case bugs. Or drinking beer...

    Hell, it wastes your time arguing about it, and you do lots of useful
    things when you're not trying to force your minimalist philosophy on people.

    --
    Bill Davidsen
    "We have more to fear from the bungling of the incompetent than from
    the machinations of the wicked." - from Slashdot
