Trying to link 2 sub processes to each other with a pipe - doesn't work properly - Unix




  1. Trying to link 2 sub processes to each other with a pipe - doesn't work properly

    Hello

    I'm trying to replicate on linux what shells do when they link
    multiple processes together using pipes but to start with I simply
    tried to spawn off "ls -l" and "more" and link them together.
    Everything seems to work except that at the end "more" seems to just
    hang, even though "ls" has exited and I've reaped it. Surely this
    means the pipe should have closed, and hence "more" should receive
    an EOF on its end of the pipe on stdin, but this doesn't seem to be
    happening.

    The code is below; can anyone see what I'm doing wrong? I'm sure I'm
    just making some stupid mistake.

    Thanks for any help.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    enum
    {
        STDIN,
        STDOUT,
        STDERR
    };

    int main(int argc, char **argv)
    {
        int p[2];
        char *ls_argv[4] = { "ls", "-l", "/dev", NULL };
        char *more_argv[2] = { "more", NULL };
        int status;
        pid_t pid1;
        pid_t pid2;

        if (pipe(p) == -1)
        {
            perror("pipe() 1");
            return 1;
        }

        /* Exec ls -l */
        switch ((pid1 = fork()))
        {
        case -1:
            perror("fork() 1");
            return 1;

        case 0:
            dup2(p[1], STDOUT);
            dup2(p[1], STDERR);
            close(p[0]);
            execvp("/bin/ls", ls_argv);
            perror("execvp() 1");
            return 1;
        }

        /* Exec more */
        switch ((pid2 = fork()))
        {
        case -1:
            perror("fork() 2");
            return 1;

        case 0:
            dup2(p[0], STDIN);
            close(p[1]);
            execvp("/bin/more", more_argv);
            perror("execvp() 2");
            return 1;
        }

        waitpid(pid1, &status, 0);
        puts("*** ls reaped **");
        waitpid(pid2, &status, 0);
        puts("*** more reaped ***");

        return 0;
    }

    B2003

  2. Re: Trying to link 2 sub processes to each other with a pipe - doesn't work properly

    On Sep 21, 12:06 pm, Boltar wrote:
    > I'm trying to replicate on linux what shells do when they link
    > multiple processes together using pipes but to start with I simply
    > tried to spawn off "ls -l" and "more" and link them together.
    > Everything seems to work except at the end "more" seems to just
    > hang even though "ls" has exited and I've reaped it. Surely
    > this means the pipe should have closed and hence "more" should
    > receive an EOF down its end of the pipe on stdin but this doesn't
    > seem to be happening.

    ...

    The process reading from the pipe will see EOF when *all* the
    instances of the write-end of the pipe are closed. There are three
    processes in your setup; one has exited, the second closed the write-
    end before execing, that leaves the third: your process calling
    waitpid() never closed the pipe ends.


    Philip Guenther
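
    A minimal standalone sketch of that rule (my own demo, not code from
    the thread): the read end reports EOF only after every copy of the
    write end, the parent's included, has been closed.

```c
#include <sys/wait.h>
#include <unistd.h>

/* Demo: read() returns 0 (EOF) only once ALL copies of the pipe's
   write end are closed -- the child's on exit, the parent's
   explicitly.  Returns 0 if EOF was seen as expected. */
int pipe_eof_demo(void)
{
    int p[2];
    if (pipe(p) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == -1)
        return -1;

    if (pid == 0) {              /* child: write two bytes and exit */
        close(p[0]);
        write(p[1], "hi", 2);
        _exit(0);                /* child's copy of p[1] closes here */
    }

    close(p[1]);                 /* parent must drop its copy too */

    char buf[16];
    ssize_t n, total = 0;
    while ((n = read(p[0], buf, sizeof buf)) > 0)
        total += n;              /* n == 0 here means EOF */

    close(p[0]);
    waitpid(pid, NULL, 0);
    return (n == 0 && total == 2) ? 0 : -1;
}
```

    Comment out the parent's close(p[1]) and the read() blocks forever,
    which is exactly the hang the original program shows while its
    parent process sits in waitpid() with p[1] still open.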

  3. Re: Trying to link 2 sub processes to each other with a pipe - doesn't work properly

    On 21 Sep, 20:06, Boltar wrote:

    > I'm trying to replicate on linux what shells do when they link
    > multiple processes together using pipes but to start with I simply
    > tried to spawn off "ls -l" and "more" and link them together.
    > Everything seems to work except at the end "more" seems to just hang
    > even though "ls" has exited and I've reaped it. Surely this
    > means the pipe should have closed and hence "more" should receive an
    > EOF down its end of the pipe on stdin but this doesn't seem to be
    > happening.
    >
    > The code is below, can anyone see what I'm doing wrong? I'm sure I've
    > just making some stupid mistake.


    > enum
    > {
    >     STDIN,
    >     STDOUT,
    >     STDERR
    > };


    This is not really necessary, since STDIN_FILENO, etc. are defined
    in unistd.h. (If you prefer the shorter STDIN, of course that's
    fine; I just thought a reader might be unaware of the definitions
    given in unistd.h.)


    >     dup2(p[1],STDOUT);
    >     dup2(p[1],STDERR);
    >     close(p[0]);


    You should also close(p[1]) here. You want this
    process to have exactly 3 open file descriptors, 0, 1, 2.
    If you don't close p[1], it will still be open after
    exec.

    >     waitpid(pid1,&status,0);


    Closing the pipe after dup2 is not strictly necessary in your case,
    since having the extra fd open in your children is not causing an
    actual problem (ls terminates and its copy of p[1] is closed, and
    you close more's copy of p[1] before the exec), but the caller of
    waitpid() still has p[1] open, and more is waiting for data from it.
    >     puts("*** ls reaped **");

    close(p[1]);

    This is probably the minimal change needed
    to make your code work, but you do want to
    close after dup2 in each case above for
    thoroughness.
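
    Putting that advice together, here is a sketch of a corrected
    pipeline (my own reconstruction, not the poster's final code; "cat"
    stands in for "more" so it runs without a terminal). The decisive
    change is the parent closing both pipe fds before waiting:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Corrected pipeline sketch: "ls -l /dev | cat".  The fix discussed
   in the thread is the pair of close() calls in the parent before
   waitpid().  Returns 0 if the reader exits cleanly. */
int run_pipeline(void)
{
    int p[2];
    int status;
    char *ls_argv[]  = { "ls", "-l", "/dev", NULL };
    char *cat_argv[] = { "cat", NULL };
    pid_t pid1, pid2;

    if (pipe(p) == -1)
        return -1;

    if ((pid1 = fork()) == 0) {      /* writer: ls -l /dev */
        dup2(p[1], STDOUT_FILENO);
        close(p[0]);
        close(p[1]);                 /* drop the spare copy after dup2 */
        execvp("ls", ls_argv);
        _exit(127);
    }

    if ((pid2 = fork()) == 0) {      /* reader: cat */
        dup2(p[0], STDIN_FILENO);
        close(p[0]);
        close(p[1]);
        execvp("cat", cat_argv);
        _exit(127);
    }

    close(p[0]);                     /* the crucial fix: the parent   */
    close(p[1]);                     /* must drop its pipe fds too    */

    waitpid(pid1, &status, 0);
    waitpid(pid2, &status, 0);       /* status now holds cat's status */
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```

    With the parent's copies closed, the reader's stdin hits EOF as soon
    as ls exits, so the second waitpid() returns instead of hanging.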



  4. Re: Trying to link 2 sub processes to each other with a pipe - doesn't work properly

    On Sep 21, 9:02 pm, "guent...@gmail.com" wrote:
    > On Sep 21, 12:06 pm, Boltar wrote:
    > > I'm trying to replicate on linux what shells do when they link
    > > multiple processes together using pipes but to start with I simply
    > > tried to spawn off "ls -l" and "more" and link them together.
    > > Everything seems to work except at the end "more" seems to just
    > > hang even though "ls" has exited and I've reaped it. Surely
    > > this means the pipe should have closed and hence "more" should
    > > receive an EOF down its end of the pipe on stdin but this doesn't
    > > seem to be happening.

    >
    > ...
    >
    > The process reading from the pipe will see EOF when *all* the
    > instances of the write-end of the pipe are closed. There are three
    > processes in your setup; one has exited, the second closed the write-
    > end before execing, that leaves the third: your process calling
    > waitpid() never closed the pipe ends.
    >
    > Philip Guenther


    I knew I'd just made some dumb mistake. Thanks for that.

    B2003
