Re: [9fans] Google search of the day - Plan9



Thread: Re: [9fans] Google search of the day

  1. Re: [9fans] Google search of the day

    > In most /bin/sh variants (I'm not sure about the original):
    >
    > :(){ :|:& };:
    >
    > Quick denial of service.
    >


    This just in: Repeated forks can bring down a system.
    Story at 11.

    #include <sys/types.h>
    #include <unistd.h>

    int main() {
        for (;;)
            fork();
    }

    Look ma!

    John


  2. Re: [9fans] Google search of the day

    That is exactly how that shell attack works.

    At some point, however, the supply of dynamically
    allocated task objects should run out, and the program should just
    spin in an infinite loop of failing calls to fork(). What a
    shame. I'm happy to say the OS I'm writing behaves that way. It isn't
    a UNIX-like one, but the semantics are similar.

    On Feb 13, 2008, at 6:59 PM, john@csplan9.rit.edu wrote:

    >> In most /bin/sh variants (I'm not sure about the original):
    >>
    >> :(){ :|:& };:
    >>
    >> Quick denial of service.
    >>

    >
    > This just in: Repeated forks can bring down a system.
    > Story at 11.
    >
    > #include <sys/types.h>
    > #include <unistd.h>
    >
    > int main() {
    >     for (;;)
    >         fork();
    > }
    >
    > Look ma!
    >
    > John
    >



  3. Re: [9fans] Google search of the day

    john@csplan9.rit.edu wrote:
    > for (;;)
    >     fork();


    In genuine UNIX(tm) systems, there is a per-user process limit,
    so eventually the fork requests start failing. However, this
    program keeps trying to fork, so if you kill off some of the
    child processes it will spawn replacements.

    I don't think it counts as a proper "denial of service" attack,
    since it affects only the invoking user (well, it does bog
    down the system with swapping etc. but again, per-user resource
    bounds can address that).
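
    (For comparison, a minimal sketch of that kind of per-user bound, on
    systems that provide RLIMIT_NPROC such as Linux and the BSDs; the limit
    values below are arbitrary.)

    #include <stdio.h>
    #include <sys/resource.h>

    /* illustrative sketch: cap the number of processes this user may own,
       so a runaway fork() loop starts failing with EAGAIN instead of
       dragging the whole machine down with it */
    int main(void) {
        struct rlimit rl = { 100, 100 };    /* soft and hard limits; arbitrary values */

        if (setrlimit(RLIMIT_NPROC, &rl) < 0)
            perror("setrlimit");
        /* ... run the untrusted program here; its fork()s are now bounded ... */
        return 0;
    }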

  4. Re: [9fans] Google search of the day

    > john@csplan9.rit.edu wrote:
    >> for (;;)
    >>     fork();

    >
    > In genuine UNIX(tm) systems, there is a per-user process limit,
    > so eventually the fork requests start failing. However, this
    > program keeps trying to fork, so if you kill off some of the
    > child processes it will spawn replacements.
    >
    > I don't think it counts as a proper "denial of service" attack,
    > since it affects only the invoking user (well, it does bog
    > down the system with swapping etc. but again, per-user resource
    > bounds can address that).


    It's a proper denial of service on Plan 9... I ran it while
    cpu'd into another box and found that somehow it killed *both*
    machines. IIRC, Plan 9 doesn't really have any kind of resource
    limiting, does it? (Yeah, I know, not necessary for a terminal,
    but it should be important for a cpu server)


    John


  5. Re: [9fans] Google search of the day

    >
    > It's a proper denial of service on Plan 9... I ran it while
    > cpu'd into another box and found that somehow it killed *both*
    > machines. IIRC, Plan 9 doesn't really have any kind of resource
    > limiting, does it? (Yeah, I know, not necessary for a terminal,
    > but it should be important for a cpu server)
    >


    this is a problem for unix and plan 9. as soon as you try resource
    limiting, you have a new way to dos a machine. force the hostowner
    (or root) to use up his allotment. okay, the unix guys have thought
    of this and so they limit the number of ssh connections. the problem
    is that if your access to the box as root is via ssh, you're still done.
    i don't see an easy systematic solution to this type of problem.

    as was discussed a few months ago, any system that has unreserved
    stack allocation can suffer from similar problems. the oom killer
    on linux is a symptom of this kind of problem.

    - erik

  6. Re: [9fans] Google search of the day

    erik quanstrom wrote:
    > as was discussed a few months ago, any system that has unreserved
    > stack allocation can suffer from similar problems. the oom killer
    > on linux is a symptom of this kind of problem.


    Speaking of such things:

    (1) Linux had/has a "feature" where the storage reserved by
    malloc/sbrk may be over-committed, so that despite a "success"
    indication from the allocation, when the application gets around
    to using the storage it could suddenly fail due to there not
    being enough to satisfy all current processes. I urged (but
    don't know whether anybody listened) that overcommitment should
    be disabled by default, with processes that want it (said to
    include "sparse array" programs, which sounds like bad design
    but that's another issue) being required to enable it by a
    specific request, or at least flagged as special in the
    executable object file. I kludged around this in my portable
    malloc implementation by having a configuration flag which if
    set caused malloc to attempt to touch every page before
    reporting success, trapping SIGSEGV in order to maintain
    control.
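
    (A hypothetical sketch of that page-touching kludge, using POSIX
    sigaction/sigsetjmp; whether an overcommitting kernel actually delivers
    SIGSEGV here, rather than invoking something like Linux's OOM killer,
    varies by system, so read it only as an illustration of the idea.)

    #include <stdlib.h>
    #include <signal.h>
    #include <setjmp.h>
    #include <unistd.h>

    static sigjmp_buf touch_env;

    static void touch_fault(int sig) {
        (void)sig;
        siglongjmp(touch_env, 1);        /* regain control after a failed touch */
    }

    /* illustrative name, not a real API: malloc, then write to every page so
       overcommitted storage is either committed now or rejected now */
    void *careful_malloc(size_t n) {
        char *p = malloc(n);
        struct sigaction sa, old;
        long page = sysconf(_SC_PAGESIZE);
        size_t i;

        if (p == NULL)
            return NULL;
        sa.sa_handler = touch_fault;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGSEGV, &sa, &old);
        if (sigsetjmp(touch_env, 1)) {   /* a touch faulted: report failure */
            sigaction(SIGSEGV, &old, NULL);
            free(p);
            return NULL;
        }
        for (i = 0; i < n; i += page)
            p[i] = 0;                    /* force each page to be backed by real storage */
        sigaction(SIGSEGV, &old, NULL);
        return p;
    }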

    (2) C, as well as many other PLs, has always had a problem in
    that there is no clean, standard mechanism to handle the
    situation in which a function invocation finds insufficient
    stack remaining to complete the linkage (instance allocation).
    This is especially problematic in memory-constrained apps such
    as many embedded systems, when the algorithm is sufficiently
    dynamic that it is impossible to predict the maximum nesting
    depth. At least with malloc failure, the program is informed
    when there is a problem and can take measures to cope with it.
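
    (Lacking such a mechanism, the usual coping strategy is an explicit
    depth budget, roughly like the hypothetical sketch below; the limit is
    an arbitrary number, not something the language can tell you.)

    enum { MAXDEPTH = 10000 };    /* arbitrary budget; the language offers no real number */

    /* hypothetical sketch: thread an explicit depth count through the recursion
       and fail cleanly instead of silently overflowing the machine stack */
    int descend(const char *p, int depth) {
        if (depth > MAXDEPTH)
            return -1;                        /* caller sees an error it can cope with */
        if (*p == '\0')
            return 0;                         /* base case */
        return descend(p + 1, depth + 1);     /* stand-in for the real recursion */
    }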

    I hope people working on run-time environments will find ways
    to do better.

  7. Re: [9fans] Google search of the day

    You can turn this off with

    # echo 2 > /proc/sys/vm/overcommit_memory

    See also the kernel Documentation directory, files
    vm/overcommit-accounting and sysctl/vm.txt.


    Douglas A. Gwyn wrote:
    >
    > Speaking of such things:
    >
    > (1) Linux had/has a "feature" where the storage reserved by
    > malloc/sbrk may be over-committed,



  8. Re: [9fans] Google search of the day

    believe it or not plan 9 does exactly the same thing, as was
    discussed in august under the subject "plan 9 overcommits memory?".

    fundamentally, i think the stack problem is an operating system
    problem not a language problem. (unless you're talking about
    8 or 16-bit embedded things.) the thread library, for example,
    does just fine requiring its threads to declare a stack size.
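
    (for instance, a minimal sketch against the thread(2) interface; the
    stack sizes are arbitrary, the point being only that each thread states
    its stack size up front.)

    #include <u.h>
    #include <libc.h>
    #include <thread.h>

    void
    worker(void *arg)
    {
        print("hello from %s\n", (char*)arg);
        threadexits(nil);
    }

    void
    threadmain(int argc, char *argv[])
    {
        USED(argc); USED(argv);
        /* every thread's stack is sized explicitly at creation time;
           nothing grows on demand behind the program's back */
        threadcreate(worker, "first", 8192);
        threadcreate(worker, "second", 8192);
        threadexits(nil);
    }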

    in the case of a memory-constrained app on an embedded system,
    the solution might be to find/write an algorithm that doesn't need a stack. :-)

    - erik

    > (1) Linux had/has a "feature" where the storage reserved by
    > malloc/sbrk may be over-committed, so that despite a "success"
    > indication from the allocation, when the application gets around
    > to using the storage it could suddenly fail due to there not
    > being enough to satisfy all current processes. I urged (but
    > don't know whether anybody listened) that overcommitment should
    > be disabled by default, with processes that want it (said to
    > include "sparse array" programs, which sounds like bad design
    > but that's another issue) being required to enable it by a
    > specific request, or at least flagged as special in the
    > executable object file. I kludged around this in my portable
    > malloc implementation by having a configuration flag which if
    > set caused malloc to attempt to touch every page before
    > reporting success, trapping SIGSEGV in order to maintain
    > control.
    >
    > (2) C, as well as many other PLs, has always had a problem in
    > that there is no clean, standard mechanism to handle the
    > situation in which a function invocation finds insufficient
    > stack remaining to complete the linkage (instance allocation).
    > This is especially problematic in memory-constrained apps such
    > as many embedded systems, when the algorithm is sufficiently
    > dynamic that it is impossible to predict the maximum nesting
    > depth. At least with malloc failure, the program is informed
    > when there is a problem and can take measures to cope with it.
    >
    > I hope people working on run-time environments will find ways
    > to do better.



  9. Re: [9fans] Google search of the day

    >> (2) C, as well as many other PLs, has always had a problem in
    >> that there is no clean, standard mechanism to handle the
    >> situation in which a function invocation finds insufficient
    >> stack remaining to complete the linkage (instance allocation).
    >> This is especially problematic in memory-constrained apps such
    >> as many embedded systems, when the algorithm is sufficiently
    >> dynamic that it is impossible to predict the maximum nesting
    >> depth. At least with malloc failure, the program is informed
    >> when there is a problem and can take measures to cope with it.
    >>
    >> I hope people working on run-time environments will find ways
    >> to do better.


    FORTRAN never had this problem. Its memory needs were fixed at
    compile time. Neither did COBOL. But then again, you couldn't write
    recursive programs either; all locals were static storage class. The
    trade-off to get recursion has been worth it and doesn't cause
    problems in actual use. It wasn't a problem with Algol, PL/1, Pascal,
    and C programs on very small machines. Why should it be a problem
    with today's memory sizes?

    Brantley


  10. Re: [9fans] Google search of the day

    On Fri, 15 Feb 2008 14:56:57 -0000, Brantley Coile wrote:

    > FORTRAN never had this problem. Its memory needs were fixed at
    > compile time. Neither did COBOL. But then again, you couldn't
    > write recursive programs either; all locals were static storage
    > class. The trade-off to get recursion has been worth it and doesn't
    > cause problems in actual use.


    My understanding has always been that the stack is a fundamental element
    of the x86 architecture. SP and BP registers, later ESP and EBP, are all
    about the stack. All return addresses are stored on the stack. Parameter
    passing relies on it. And I know of no other means of implementing them.
    Except by avoiding call/ret instructions and solely jmp'ing around in a
    true mess of code. No true "procedures."

    I am almost sure the modern incarnations of FORTRAN (90, or even 77?) do
    support both true procedures and recursion. Though, I have not tried them
    so I do not have a say there.

    Automatic/scoped variables are allocated on the stack frame for procedure
    calls and/or nested code blocks (Flat Assembler's rendition of the IA-32
    "enter" instruction supports up to 32 stack frames). And without them,
    programming would be a lot harder. There is also the "growing" heap for
    implementing dynamic variables which is quite as problematic as the stack
    because it, too, can grow beyond bounds and give one headaches.

    > It wasn't a problem with Algol, PL/1, Pascal, and C programs on very
    > small machines. Why should it be a problem with today's memory sizes?


    I remember getting a lot of "out of stack" errors from QuickBASIC, Turbo C
    and Borland C on MS-DOS. That was back when my computer had 8 MBs of RAM
    (a Win32 PE today has a default stack size of 8 MBs!)

    These days, stack overflows are most likely to cause problems by mingling
    data (return addresses and/or parameters) with executable code. I suppose
    segmentation mechanisms are there to precisely solve that problem. Never
    curse the "segfault" again ;-)

    --
    Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

  11. Re: [9fans] Google search of the day

    > My understanding has always been that the stack is a fundamental element
    > of the x86 architecture. SP and BP registers, later ESP and EBP, are all
    > about the stack. All return addresses are stored on the stack. Parameter
    > passing relies on it. And I know of no other means of implementing them.
    > Except by avoiding call/ret instructions and solely jmp'ing around in a
    > true mess of a code. No true "procedures."


    while this is true, you are confusing calling convention and architecture.
    the arch puts some limits on calling convention, but there is no requirement
    to use the stack if you have one.

    you could have a calling convention that every function emits a call block
    with the arguments at callblock+offset and the return value(s) at
    callblock-offset. doesn't matter if the arch has a stack or not. you are free to ignore it.
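
    (a hypothetical C sketch of that idea -- the struct and names are
    invented, and the C compiler naturally still does its own stack
    bookkeeping underneath; the point is only where the arguments and
    results live.)

    /* "call block" convention: inputs and results sit at fixed offsets in an
       explicit record instead of being pushed on a machine stack */
    struct callblock {
        int arg0, arg1;    /* arguments at callblock+offset */
        int ret0;          /* return value at another fixed offset */
    };

    void
    add(struct callblock *cb)
    {
        cb->ret0 = cb->arg0 + cb->arg1;
    }

    /* usage: struct callblock cb = {2, 3}; add(&cb);  now cb.ret0 == 5 */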
    > I am almost sure the modern incarnations of FORTRAN (90, or even 77?) do
    > support both true procedures and recursion. Though, I have not tried them
    > so I do not have a say there.


    fortran 95 supports recursion. this does not imply that the *language*
    requires a stack. it could be done via heap allocation. i don't know enough
    fortran to comment intelligently on how it's done.
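
    (a hypothetical C sketch of heap-allocated activation records --
    recursion with no machine-stack growth; the names are invented and
    malloc error checks are omitted.)

    #include <stdlib.h>

    /* factorial with malloc'ed "frames" linked to their caller, driven by
       ordinary loops rather than nested machine-stack calls */
    struct frame {
        int n;
        struct frame *up;    /* caller's frame */
    };

    int
    fact(int n)
    {
        struct frame *f = malloc(sizeof *f), *up;
        int r = 1;

        f->n = n;
        f->up = NULL;
        while (f->n > 1) {               /* "call": push a new heap frame */
            struct frame *g = malloc(sizeof *g);
            g->n = f->n - 1;
            g->up = f;
            f = g;
        }
        while (f != NULL) {              /* "return": unwind, combining results */
            up = f->up;
            if (f->n > 1)
                r *= f->n;
            free(f);
            f = up;
        }
        return r;
    }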

    > Automatic/scoped variables are allocated on the stack frame for procedure
    > calls and/or nested code blocks (Flat Assembler's rendition of the IA-32
    > "enter" instruction supports up to 32 stack frames). And without them,
    > programming would be a lot harder.


    some languages -- notably c -- assume a stack frame. there are many
    languages, like fortran, that do not. you wouldn't notice what the
    compiler is doing in those languages.

    > There is also the "growing" heap for
    > implementing dynamic variables which is quite as problematic as the stack
    > because it, too, can grow beyond bounds and give one headaches.


    this is an entirely different problem as heap allocations in most languages
    are *explicit*, thus allowing the programmer to respond to the situation
    appropriately. iirc, v7 sort used a binary search algorithm to figure out
    how much memory it could sbrk(). e.g.

    int memsize;

    for(memsize = 1024*1024; memsize >= 4*1024; memsize /= 2)   /* halve until a request succeeds */
        if(sbrk(memsize) > 0)
            break;

    - erik


  12. Re: [9fans] Google search of the day

    http://en.wikipedia.org/wiki/Stackless_Python

    Is it the same sort of stack?

    It mentions Limbo as an inspiration

    The threaded / concatenative languages such as Forth / Joy / Cat / V
    don't use a call stack, though they still use a stack for parameter
    manipulation.

    Charles Moore said that you get about 40 items on the stack in a big
    program iirc, though that was a few years ago.

    Still, not much use to C based environments.

    I'm hoping to get good enough at Forth to take one of these puppies for
    a walk one day :
    http://www.intellasys.net/index.php?...d=35&Itemid=63

  13. Re: [9fans] Google search of the day

    On Fri, 15 Feb 2008 16:07:59 -0000, erik quanstrom wrote:

    > while this is true, you are confusing calling convention and architecture.
    > the arch puts some limits on calling convention, but there is no
    > requirement to use the stack if you have one.
    >
    > you could have a calling convention that every function emits a call block
    > with the arguments at callblock+offset and the return value(s) at
    > callblock-offset. doesn't matter if the arch has a stack or not. you are
    > free to ignore it.


    The calling conventions I have seen are the ccall, stdcall (Windows'
    slightly modified version of the ccall), and pascal. All of them push
    parameters on the stack.

    On x86 (the only architecture I have seen), the return address is pushed
    on the stack as soon as something like "call 0x04500" or "call _myfunc" is
    executed. Not popping the right number of parameters before executing ret
    has time and again been a problem with my programs.

    I understand that the push order and marshalling strategy
    (by-value/by-reference) need not be the same from one language to the
    other. Still, all "implementations" of languages on x86 (even FORTRAN,
    according to my copy of Steven Holzner's Advanced Assembly Programming)
    "have" to push their parameters on the stack. I cannot see any other way to
    keep the bare minimum required for a procedure call: the return address,
    that is. Probably because I do not understand the notion of a "callblock."


    > some languages -- notably c -- assume a stack frame. there are many
    > languages, like fortran, that do not. you wouldn't notice what the
    > compiler is doing in those languages.


    You mean, if I disassemble a FORTRAN-compiled binary I will not see the
    classical EBP/ESP manipulation at the beginning of each procedure with
    automatic variables? I have disassembled C-compiled binaries and seen that
    (actually did it so that I could do the same in my assembly programs).

    ----snippet----
    ;enter the stack frame
    push ebp
    mov ebp, esp
    ;reserve stack space for a 32-bit automatic variable
    sub esp, 04h

    ;leave the stack frame
    mov esp, ebp
    pop ebp
    ;returning requires the return address
    ;(the previous value of EIP) pushed on the stack
    ret
    ----snippet----

    On Fri, 15 Feb 2008 16:43:22 -0000, maht wrote:

    > The threaded / concatenative languages such as Forth / Joy / Cat / V
    > don't use a call stack, though they still use a stack for parameter
    > manipulation.


    I have heard Forth is used for writing bootloaders (FreeBSD's loader was
    written in Forth last time I checked). If the architecture is x86, then
    issuing a call (op-code 0x9A+[ModRM]) certainly pushes EIP on the stack.
    Perhaps they use jmp for branching.

    > http://en.wikipedia.org/wiki/Stackless_Python
    >
    > Is it the same sort of stack ?


    Python is an "interpreted" language (like Perl). Everything is implemented
    on a semi-virtual machine (they usually call it the engine). Not sure
    their stack is the same as the x86 stack segment.

    "Stackless Python, or Stackless, is an experimental implementation of the
    Python programming language, so named because it avoids depending on the C
    call stack for its own stack. The language supports generators,
    microthreads, and coroutines." --Wikipedia

    --
    Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

  14. Re: [9fans] Google search of the day

    >> #include <sys/types.h>
    >> #include <unistd.h>
    >>
    >> int main() {
    >>     for (;;)
    >>         fork();
    >> }
    >>
    >> Look ma!
    >>
    >> John


    this is the closest solution i've found:

    Patient: Doctor, it hurts when I do this.
    Doctor: Then don't do that!


  15. Re: [9fans] Google search of the day

    On Fri, Feb 15, 2008 at 8:43 AM, maht wrote:

    > Charles Moore said that you get about 40 items on the stack in a big
    > program iirc, though that was a few years ago.
    >


    he musta never done breakpoint in SunOS on a read ...

    there was so much on the stack you couldn't display it on the screen.

    ron

  16. Re: [9fans] Google search of the day

    On Fri, Feb 15, 2008 at 3:35 PM, ron minnich wrote:
    > On Fri, Feb 15, 2008 at 8:43 AM, maht wrote:
    >
    > > Charles Moore said that you get about 40 items on the stack in a big
    > > program iirc, though that was a few years ago.
    > >

    >
    > he musta never done breakpoint in SunOS on a read ...
    >
    > there was so much on the stack you couldn't display it on the screen.
    >
    > ron
    >


    that number^2 for some OS X stuff

    iru

  17. Re: [9fans] Google search of the day

    > he musta never done breakpoint in SunOS on a read ...



    I assume he meant while running Forth




  18. Re: [9fans] Google search of the day

    Forth doesn't use CALL, it uses JMP.

    Similar Threaded Interpretive Languages run an inner loop that does this
    pseudo-code:
    (from Threaded Interpretive Languages: Their Design and Implementation
    by R. G. Loeliger)

    COLON:
        PSH I -> RS
        WA -> I
        JMP NEXT
    SEMI:
        POP RS -> I
    NEXT:
        @I -> WA
        I += 2
    RUN:
        @WA -> CA
        WA += 2
        CA -> PC

    RS is the return stack, PC is the program counter; procedures JMP to NEXT
    or SEMI when they are done.
    I is the indirection pointer, WA is the word address (the nomenclature
    for a procedure).
    The 2s are there because this code assumes 16-bit addressing; @ is
    indirection.
    CA is the call address.

    so DUP is a word defined as below (it copies the item on the top of the
    stack back onto the top of the stack).
    execute is a type marker; you can also have constant & colon.

    dup:
        execute
        POP SP -> A
        PUSH A -> SP
        PUSH A -> SP
        JMP NEXT


    that's all very well, but you then extend the assembler parts with new
    words made from the WAs (colon is the type marker for this); SEMI is
    where it JMPs to at the end:

    moddiv:
        colon
        divmod
        swop
        semi
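
    (For C-minded readers, a rough, hypothetical C model of the same
    NEXT/RUN/COLON/SEMI machinery; the types and names are invented, and it
    is deliberately not a usable Forth.)

    typedef void (*prim)(void);  /* a primitive: the machine code behind a word */

    prim **ip;          /* I:  walks a list of word addresses            */
    prim  *wa;          /* WA: the word address currently being executed */
    prim **rstack[64];  /* RS: return stack of saved I values            */
    int    rsp;

    void do_colon(void) { rstack[rsp++] = ip; ip = (prim**)(wa + 1); }  /* COLON: nest   */
    void do_semi(void)  { ip = rstack[--rsp]; }                         /* SEMI:  unnest */

    void
    inner(void)
    {
        for (;;) {
            wa = *ip++;     /* NEXT: @I -> WA, then advance I one cell */
            (*wa)();        /* RUN:  fetch the code address and run it */
        }
    }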



    I wrote a TIL in AVR assembler last summer, but I'm waiting to go on
    holiday this year so I can write some applications in it now that I have
    a dev board :>





  19. Re: [9fans] Google search of the day

    On Fri, 15 Feb 2008 19:05:28 -0000, maht wrote:

    > Forth doesn't use CALL it uses JMP
    >
    > Similar Threaded Interpretive Languages run an inner loop that does this
    > pseudo code :
    > (from Threaded Interpretive Languages: Their Design and Implementation
    > by R. G. Loeliger)
    >
    > COLON:
    > PSH I -> RS
    > WA -> I
    > JMP NEXT
    > SEMI:
    > POP RS -> I
    > NEXT: @I -> WA
    > I += 2
    > RUN: @WA -> CA
    > WA += 2
    > CA -> PC
    >


    I know nothing about interpreted/interpretive languages but the above
    pseudo-code is, to my own surprise, quite readable to me.

    The "indirection pointer" seems to functionally match the (stack) base
    pointer on x86, with the word address equalling the stack pointer. So, the
    above says:

    - Save the stack base address (PSH is PUSH, I assume).
    - Set the stack base address to the beginning of the current frame.
    - Jump to NEXT.
    - Put the return address on top of the stack.
    - Reserve space on the stack for one 16-bit (2-byte) address by
    incrementing the base pointer. On x86 one would decrement the base
    because the stack grows from higher addresses towards 0x0.
    - Set the call address to the beginning of the procedure.
    - Set the program counter to the procedure ("call" it, in other words).

    One strange thing is that code and (stack) data seem to both grow from 0x0
    towards 0xFFFF, unlike x86 where the code sits in low addresses and the stack
    grows downwards. You have re-implemented the calling mechanism. You still
    have a call stack and the pertaining problems, but you have lost the
    convenience of a simple call instruction.

    By the way, now I see what Erik Quanstrom meant by "alternative" calling
    conventions. Your pseudo-code is an example.

    > that's all very well but you then extend the assembler parts with new
    > words made from the WAs (colon is the type marker for this) SEMI is
    > where it JMPs to at the end
    >
    > moddiv:
    > colon
    > divmod
    > swop
    > semi


    I am genuinely lost; I do not understand a word of it.

    --
    Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

  20. Re: [9fans] Google search of the day

    Let's get back to humor. Here's from ReactOS 0.3.4's shutdown screen:

    The End ..... Try the sequel, hit the reset button right now!

    You can switch off your computer now


