
Thread: Re: [9fans] plan 9 overcommits memory?

  1. Re: [9fans] plan 9 overcommits memory?

    If system calls were the only way to change memory allocation, one
    could probably keep a strict accounting of pages allocated and fail
    system calls that require more VM than is available. But neither Plan
    9 nor Unix works that way. The big exception is stack growth. The
    kernel automatically extends a process's stack segment as needed. On
    the pc, Plan 9 currently limits user-mode stacks to 16MB. On a CPU
    server with 200 processes (fairly typical), that's 3.2GB of VM one
    would have to commit just for stacks. With 2,000 processes, that
    would rise to 32GB just for stacks.


  2. Re: [9fans] plan 9 overcommits memory?

    On Sun, Sep 02, 2007 at 11:38:44PM -0400, geoff@plan9.bell-labs.com wrote:
    > would have to commit just for stacks. With 2,000 processes, that
    > would rise to 32GB just for stacks.


    With 4GB RAM, wouldn't you allocate at least that much swap
    no matter what?


  3. Re: [9fans] plan 9 overcommits memory?

    Except that swap is, as far as I have been able to figure out, broken.

    uriel

    On 3 Sep 2007 01:35:14 -0400, Scott Schwartz wrote:
    > On Sun, Sep 02, 2007 at 11:38:44PM -0400, geoff@plan9.bell-labs.com wrote:
    > > would have to commit just for stacks. With 2,000 processes, that
    > > would rise to 32GB just for stacks.

    >
    > With 4GB RAM, wouldn't you allocate at least that much swap
    > no matter what?


  4. Re: [9fans] plan 9 overcommits memory?

    > If system calls were the only way to change memory allocation, one
    > could probably keep a strict accounting of pages allocated and fail
    > system calls that require more VM than is available. But neither Plan
    > 9 nor Unix works that way. The big exception is stack growth. The
    > kernel automatically extends a process's stack segment as needed. On
    > the pc, Plan 9 currently limits user-mode stacks to 16MB. On a CPU
    > server with 200 processes (fairly typical), that's 3.2GB of VM one
    > would have to commit just for stacks. With 2,000 processes, that
    > would rise to 32GB just for stacks.


    16MB for stacks seems awfully high to me. are there any programs that
    need even 1/32nd of that? 512k still allows 32k levels of recursion of
    a function needing 4 long arguments. a quick count on my home machine
    and some coraid servers doesn't show any processes using more than 1
    page of stack.

    doing strict accounting on the pages allocated would, i think, be an
    improvement. i also don't see a reason not to shrink the maximum
    stack size.

    the current behavior seems pretty exploitable to me. even remotely,
    if one can force stack/brk allocation via smtp, telnet or whatnot.

    - erik
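
    a rough sketch of the kind of count described above, in Plan 9 C: walk
    /proc and total each process's Stack segment. it assumes the segment-file
    layout documented in proc(3) and is an illustration, not a vetted tool.

        #include <u.h>
        #include <libc.h>

        void
        main(void)
        {
            int fd, n;
            long i, nd;
            Dir *d;
            char buf[4096], path[64], *p;
            uvlong lo, hi, total;

            fd = open("/proc", OREAD);
            if(fd < 0)
                sysfatal("open /proc: %r");
            nd = dirreadall(fd, &d);
            close(fd);
            if(nd < 0)
                sysfatal("dirreadall: %r");
            total = 0;
            for(i = 0; i < nd; i++){
                snprint(path, sizeof path, "/proc/%s/segment", d[i].name);
                fd = open(path, OREAD);
                if(fd < 0)
                    continue;   /* not a process, or it exited */
                n = read(fd, buf, sizeof buf - 1);
                close(fd);
                if(n <= 0)
                    continue;
                buf[n] = 0;
                /* each segment line: type, low address, high address (hex), refs */
                for(p = strstr(buf, "Stack"); p != nil; p = strstr(p, "Stack")){
                    lo = strtoull(p+5, &p, 16);
                    hi = strtoull(p, &p, 16);
                    total += hi - lo;
                }
            }
            free(d);
            print("%llud bytes of stack segments in use\n", total);
            exits(nil);
        }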


  5. Re: [9fans] plan 9 overcommits memory?

    >> would have to commit just for stacks. With 2,000 processes, that
    >> would rise to 32GB just for stacks.

    >
    > With 4GB RAM, wouldn't you allocate at least that much swap
    > no matter what?


    that's pretty expensive if you're booting from flash and not using a remote
    fileserver. 8GB flash is expensive, not to mention deadly slow.

    also, why should i have to have swap? i really don't want it. it
    introduces new failure modes and could introduce wide latency
    variations. linux called, it wants its choppy, laggy ui back.

    - erik


  6. Re: [9fans] plan 9 overcommits memory?

    >> If system calls were the only way to change memory allocation, one
    >> could probably keep a strict accounting of pages allocated and fail
    >> system calls that require more VM than is available. But neither Plan
    >> 9 nor Unix works that way. The big exception is stack growth. The
    >> kernel automatically extends a process's stack segment as needed. On
    >> the pc, Plan 9 currently limits user-mode stacks to 16MB. On a CPU
    >> server with 200 processes (fairly typical), that's 3.2GB of VM one
    >> would have to commit just for stacks. With 2,000 processes, that
    >> would rise to 32GB just for stacks.

    >
    > 16MB for stacks seems awfully high to me. are there any programs that
    > need even 1/32nd of that? 512k still allows 32k levels of recursion of
    > a function needing 4 long arguments. a quick count on my home machine
    > and some coraid servers doesn't show any processes using more than 1
    > page of stack.
    >
    > doing strict accounting on the pages allocated would, i think, be an
    > improvement. i also don't see a reason not to shrink the maximum
    > stack size.
    >
    > the current behavior seems pretty exploitable to me. even remotely,
    > if one can force stack/brk allocation via smtp, telnet or whatnot.
    >
    > - erik


    Most applications probably use much less than 1 MB, but a lot depends
    on who wrote the program. Our threaded programs typically have a 4K
    or 8K (K, not M) fixed-size stack per thread and that works fine,
    although you have to remember not to declare big arrays/structs as
    local variables. malloc and free become good friends in threaded
    programs.
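
    As a minimal sketch of that discipline (sizes and names here are
    illustrative, not taken from any particular program):

        #include <u.h>
        #include <libc.h>
        #include <thread.h>

        enum { Stacksize = 8*1024 };    /* 8K per-thread stack, as above */

        void
        worker(void *v)
        {
            char *buf;

            USED(v);
            /* char buf[64*1024]; here would overrun the 8K stack;
               big buffers come from the heap instead */
            buf = malloc(64*1024);
            if(buf == nil)
                sysfatal("malloc: %r");
            /* ... use buf ... */
            free(buf);
        }

        void
        threadmain(int argc, char **argv)
        {
            USED(argc);
            USED(argv);
            threadcreate(worker, nil, Stacksize);
            threadexits(nil);   /* the proc lives on until worker finishes */
        }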

    As to guarantees that you won't run out of memory, they're almost
    impossible to give. Programmers generally don't know how much memory
    their applications will use, so they can't reasonably preallocate.

    You see the same thing with real time. Nobody knows how much time
    each task will consume beforehand.

    It would be cool to be able to shrink the memory occupied by an
    application dynamically. Malloc (through brk()) grows the memory
    footprint, but free does not shrink it. The same is true for the
    stack. Once allocated, it doesn't get freed until the process exits.

    Sape
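
    The brk() point is easy to watch, assuming malloc draws its memory
    from sbrk as in the stock allocator; a throwaway test, not a claim
    about allocator internals:

        #include <u.h>
        #include <libc.h>

        void
        main(void)
        {
            char *before, *after;
            void *p;

            before = sbrk(0);       /* current break; see brk(2) */
            p = malloc(4*1024*1024);
            if(p == nil)
                sysfatal("malloc: %r");
            free(p);
            after = sbrk(0);        /* the break does not come back down */
            print("break grew %ld bytes and stayed there after free\n",
                (long)(after - before));
            exits(nil);
        }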


  7. Re: [9fans] plan 9 overcommits memory?

    > Most applications probably use much less than 1 MB, but a lot depends
    > on who wrote the program. Our threaded programs typically have a 4K
    > or 8K (K, not M) fixed-size stack per thread and that works fine,
    > although you have to remember not to declare big arrays/structs as
    > local variables. malloc and free become good friends in threaded
    > programs.


    > As to guarantees that you won't run out of memory, they're almost
    > impossible to give. Programmers generally don't know how much memory
    > their applications will use, so they can't reasonably preallocate.


    that's a much stronger condition than "if there isn't backing memory,
    brk fails". perhaps that is difficult. even if the actual condition
    is estimated, wouldn't that be an improvement?

    perhaps one could reserve, say, 16MB total for stack space. (or maybe
    some percentage of physical memory.) this could eliminate overcommits
    for brk'ed memory.
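
    a sketch of that reservation in kernel-flavored C (illustrative only;
    nothing like this is in the current kernel, and the names are made up):

        #include <u.h>
        #include <libc.h>

        enum { Stackpool = 16*1024*1024 };

        static long stackfree = Stackpool;
        static Lock stacklock;

        int
        growstack(long nbytes)      /* called at a stack-extension fault */
        {
            int ok;

            lock(&stacklock);
            ok = nbytes <= stackfree;
            if(ok)
                stackfree -= nbytes;
            unlock(&stacklock);
            return ok;              /* on 0, error the process instead of overcommitting */
        }

        void
        freestack(long nbytes)      /* called at process exit */
        {
            lock(&stacklock);
            stackfree += nbytes;
            unlock(&stacklock);
        }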

    > You see the same thing with real time. Nobody knows how much time
    > each task will consume beforehand.


    > It would be cool to be able to shrink the memory occupied by an
    > application dynamically. Malloc (through brk()) grows the memory
    > footprint, but free does not shrink it. The same is true for the
    > stack. Once allocated, it doesn't get freed until the process exits.


    yes it would. does plan 9 have programs that could make use of this
    currently?

    - erik


  8. Re: [9fans] plan 9 overcommits memory?

    >> It would be cool to be able to shrink the memory occupied by an
    >> application dynamically. Malloc (through brk()) grows the memory
    >> footprint, but free does not shrink it. The same is true for the
    >> stack. Once allocated, it doesn't get freed until the process exits.

    >
    > yes it would. does plan 9 have programs that could make use of this
    > currently?


    No, and it would be hard to do because you'd need a way to compact
    fragmented memory after a lot of mallocs and frees. And then, you'd
    need a way to fix the pointers after compacting.

    Sape
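
    The pointer-fixing problem is the classic argument for handles
    (double indirection): a compactor can move a block and update a
    single slot, at the cost of a dereference on every use. A sketch of
    the idea, purely illustrative; nothing like this exists in Plan 9's
    libc:

        #include <u.h>
        #include <libc.h>

        typedef struct Handle Handle;
        struct Handle
        {
            void    *p;     /* current location; only the compactor writes it */
        };

        void*
        hget(Handle *h)
        {
            return h->p;    /* dereference on every use; never cache the result */
        }

        void
        hmove(Handle *h, void *newplace, ulong len)
        {
            memmove(newplace, h->p, len);
            h->p = newplace;    /* the one pointer that must be fixed */
        }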


  9. Re: [9fans] plan 9 overcommits memory?

    quanstro@quanstro.net wrote:
    > also, why should i have to have swap? i really don't want it. it
    > introduces new failure modes and could introduce wide latency
    > variations. linux called, it wants its choppy, laggy ui back.
    >


    Also, it's broken, broken, broken on Plan 9 and nobody wants to fix it.
    The upside to this is that we can just say how we don't want it anyway,
    there's no conceivable reason anyone would want swap, and operating
    systems with working swap suck.


    John "Has a Swap Partition and Doesn't Know Why" Floren


  10. Re: [9fans] plan 9 overcommits memory?

    On 9/3/07, john@csplan9.rit.edu wrote:
    >
    > Also, it's broken, broken, broken on Plan 9 and nobody wants to fix it.
    > The upside to this is that we can just say how we don't want it anyway,
    > there's no conceivable reason anyone would want swap, and operating
    > systems with working swap suck.
    >
    >
    > John "Has a Swap Partition and Doesn't Know Why" Floren
    >


    Isn't it more like John "wants somebody else to fix his swap instead of
    doing it himself" Floren? If you think something is broken, fix it
    instead of complaining.
    If no one else likes it, at least you have your problem solved...

    --
    - curiosity sKilled the cat

  11. Re: [9fans] plan 9 overcommits memory?

    > On 9/3/07, john@csplan9.rit.edu wrote:
    >>
    >> Also, it's broken, broken, broken on Plan 9 and nobody wants to fix it.
    >> The upside to this is that we can just say how we don't want it anyway,
    >> there's no conceivable reason anyone would want swap, and operating
    >> systems with working swap suck.
    >>
    >>
    >> John "Has a Swap Partition and Doesn't Know Why" Floren
    >>

    >
    > Isn't it more like John "wants somebody else to fix his swap instead of
    > doing it himself" Floren? If you think something is broken, fix it
    > instead of complaining.
    > If no one else likes it, at least you have your problem solved...
    >


    I don't actually need the swap partition, it's just there... ummm... not
    sure why; I installed on this machine before I found out that swap is
    broken. And it's not that I *think* swap is broken; it's been confirmed
    by others. If I ever dig up a really old laptop with 32 MB of RAM or
    something, it could be worth it to try fixing swap, but since that itch
    doesn't exist I'm not going to scratch it.

    John


  12. Re: [9fans] plan 9 overcommits memory?

    > Also, it's broken, broken, broken on Plan 9

    but could you describe what antisocial behavior it exhibits and how one
    could reproduce this behavior? i have never used to-disk paging on plan 9,
    so i don't know.

    > and nobody wants to fix it.


    this has been a good discussion so far. let's not go off in a bad direction.

    > The upside to this is that we can just say how we don't want it anyway,
    > there's no conceivable reason anyone would want swap, and operating
    > systems with working swap suck.


    not sure how to parse this. is there a particular case where you need to-disk
    paging? i don't see the use of to-disk paging. perhaps my vision is limited.

    in the one case where i might find it useful -- embedded systems -- there's
    typically more ram than persistent storage, so paging to "disk" makes no sense.

    - erik


  13. Re: [9fans] plan 9 overcommits memory?

    > but could you describe what antisocial behavior it exhibits and how one
    > could reproduce this behavior? i have never used to-disk paging on plan 9,
    > so i don't know.



    Last time I tried, the machine froze rock solid. It happened at some
    point after the swap partition started being used (I saw its usage
    increase in stats). The interval between hitting return (to consume
    memory) and the freeze was not always the same. But this was several
    years ago, IIRC.

  14. Re: [9fans] plan 9 overcommits memory?

    >> Also, it's broken, broken, broken on Plan 9
    >
    > but could you describe what antisocial behavior it exhibits and how one
    > could reproduce this behavior? i have never used to-disk paging on plan 9,
    > so i don't know.
    >


    Well, when I used it on an old 32 MB laptop (terminal) and a 64 MB
    desktop (cpu server), swap would seem to work all right until you
    hit about 30-40% usage. This was the case with both systems; when
    I asked about it, a couple other people mentioned the same behavior.
    The thing is, it's pretty hard to test swap under normal usage; the only
    time I ran into this problem was while compiling a new kernel.

    >> and nobody wants to fix it.

    >
    > this has been a good discussion so far. let's not go off in a bad direction.


    I was just noting that when it has previously come up, the general
    consensus is that nobody wants to fix it, which is actually pretty
    reasonable--I'm guessing, as has been mentioned before, that the
    number of people who could potentially need/want swap is very low,
    especially since memory for older computers seems to grow on trees
    (around here at least).

    >
    >> The upside to this is that we can just say how we don't want it anyway,
    >> there's no conceivable reason anyone would want swap, and operating
    >> systems with working swap suck.

    >
    > not sure how to parse this. is there a particular case where you need to-disk
    > paging? i don't see the use of to-disk paging. perhaps my vision is limited.
    >
    > in the one case where i might find it useful -- embedded systems -- there's
    > typically more ram than persistent storage, so paging to "disk" makes no sense.
    >


    It's primarily old systems, I think, like that old laptop which wasn't worth
    finding more RAM for. When I set up this shiny high-spec cpu server,
    I let it put in swap space "just in case", but a couple users barely put
    a dent in that, so it will probably never be used.


  15. Re: [9fans] plan 9 overcommits memory?

    >>> Also, it's broken, broken, broken on Plan 9
    >>
    >> but could you describe what antisocial behavior it exhibits and how one
    >> could reproduce this behavior? i have never used to-disk paging on plan 9,
    >> so i don't know.
    >>

    >
    > Well, when I used it on an old 32 MB laptop (terminal) and a 64 MB
    > desktop (cpu server), swap would seem to work all right until you
    > hit about 30-40% usage. This was the case with both systems; when
    > I asked about it, a couple other people mentioned the same behavior.
    > The thing is, it's pretty hard to test swap under normal usage; the only
    > time I ran into this problem was while compiling a new kernel.
    >


    I forgot to write what happened when swap broke--like Nemo, I found
    that the machine would lock solid, requiring a reboot.

    John


  16. Re: [9fans] plan 9 overcommits memory?

    One might allocate at least 3.2GB of swap for a 4GB machine, but many
    of our machines run with no swap, and we're probably not alone. And
    200 processes are not a lot. Would you really have over 32GB of swap
    allocated for a 4GB machine with 2,000 processes?

    Programs can use a surprising amount of stack space. A recent notable
    example is venti/copy when copying from a nightly fossil dump score.
    I think we want to be generous about maximum stack sizes.

    I don't think that estimates of VM usage would be an improvement. If
    we can't get it exactly right, there will always be complaints.


  17. Re: [9fans] plan 9 overcommits memory?

    > I don't actually need the swap partition, it's just there... ummm... not
    > sure why; I installed on this machine before I found out that swap is
    > broken. And it's not that I *think* swap is broken; it's been confirmed


    it worked adequately to cover minor shortfalls in memory, which could happen
    on even the best of machines. now there is typically so much physical memory
    that it is hardly ever invoked on my systems, unless i'm burning a CD from
    ramfs and get the numbers wrong.


  18. Re: [9fans] plan 9 overcommits memory?

    >> Well, when I used it on an old 32 MB laptop (terminal) and a 64 MB
    >> desktop (cpu server), swap would seem to work all right until you
    >> hit about 30-40% usage. This was the case with both systems; when
    >> I asked about it, a couple other people mentioned the same behavior.
    >> The thing is, it's pretty hard to test swap under normal usage; the only
    >> time I ran into this problem was while compiling a new kernel.
    >>

    >
    > I forgot to write what happened when swap broke--like Nemo, I found
    > that the machine would lock solid, requiring a reboot.


    years ago i would compile and link kernels on a 4mbyte 386-16/sx (really!
    and using cpu -c to run awk, because there wasn't a 387). i was in the same
    room as the file server. you could tell when it was paging, which had a
    distinctive, dramatic sound. it paged frequently when linking a kernel.
    it survived.
    if it's broken now, it sounds as though something changed that probably could
    be tracked down and repaired. (i tend to suspect the presence of notes,
    including alarms, because of the interruptions in the kernel, but that's just
    a suspicion.) why bother? perhaps the underlying cause is messing up something
    else too. a simple, useful test case would help, though. ideally, without
    graphics.


  19. Re: [9fans] plan 9 overcommits memory?

    > One might allocate at least 3.2GB of swap for a 4GB machine, but many
    > of our machines run with no swap, and we're probably not alone. And
    > 200 processes are not a lot. Would you really have over 32GB of swap
    > allocated for a 4GB machine with 2,000 processes?
    >
    > Programs can use a surprising amount of stack space. A recent notable
    > example is venti/copy when copying from a nightly fossil dump score.
    > I think we want to be generous about maximum stack sizes.
    >
    > I don't think that estimates of VM usage would be an improvement. If
    > we can't get it exactly right, there will always be complaints.


    venti/copy's current behavior could be worked around by allocating
    stuff instead of using the stack. we don't have to base the design
    around what venti/copy does today.

    why would it be unacceptable to have a system-wide maximum stack
    allocation? say 16MB. this would allow us not to overcommit memory.

    if we allow overcommitted memory, *any* access of brk'd memory might page
    fault. this seems like a real step backwards in error recovery, as most
    programs assume that malloc either returns n bytes of valid memory or
    fails. since this assumption is false, either we need to make it true or
    fix most programs.

    upas/fs fails in this way for us all the time.

    this would have more serious consequences if, say, venti or fossil suffered
    a similar fate.

    - erik
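
    the malloc assumption above is usually encoded as a checked wrapper;
    a sketch of the idiom, and of why overcommit defeats it (emalloc is
    just the conventional name, not a reference to any particular program):

        #include <u.h>
        #include <libc.h>

        void*
        emalloc(ulong n)
        {
            void *p;

            p = malloc(n);
            if(p == nil)
                sysfatal("out of memory: %r");  /* the check programs rely on */
            memset(p, 0, n);    /* with overcommit, first touch of the pages,
                                   not the check above, is where the fault hits */
            return p;
        }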


  20. Re: [9fans] plan 9 overcommits memory?

    venti/copy is just an example; programs may legitimately have large
    stacks.

    If your machines are regularly running out of VM, something is wrong
    in your environment. I would argue that we'd be better off fixing
    upas/fs to be less greedy with memory than contorting the system to
    try to avoid overcommitting memory. If one did change the system to
    enforce a limit of 16MB for the aggregate of all system stacks, what
    would happen when a process needed to grow its stack and the 16MB were
    full? Checking malloc returns cannot suffice.

