socket programming using MSG_WAITALL for receiving multicast packets - Unix


  1. socket programming using MSG_WAITALL for receiving multicast packets

    Hi,

    I am calling recvfrom() with the flag set to MSG_WAITALL.
    I thought the function was supposed to return only when the buffer
    is completely full, or when a signal or error occurs.

    However, in my experiment, the function constantly returns with
    fewer bytes than requested. The returned size is always the same
    as the payload size of one multicast IP packet. I wonder why?

    Is there a way to make the function (or some other function)
    return only when the buffer is completely full, assuming no signal
    or error occurs? Basically, I don't want frequent context switches
    between kernel and user space. What I want is to accumulate a
    certain number of IP multicast packets before the function returns.
    I am working on Linux 2.6.

    Thanks in advance!

    John

  2. Re: socket programming using MSG_WAITALL for receiving multicast packets

    On Mar 20, 2:00 pm, jiangxu...@gmail.com wrote:

    > I am calling recvfrom() with the flag set to MSG_WAITALL.
    > I thought the function was supposed to return only when the buffer
    > is completely full, or when a signal or error occurs.


    It returns when the entire message is received.

    > However, in my experiment, the function constantly returns with
    > fewer bytes than requested. The returned size is always the same
    > as the payload size of one multicast IP packet. I wonder why?


    Because you have received the entire message. In a datagram protocol,
    each datagram is a message.

    > Is there a way to make the function (or some other function)
    > return only when the buffer is completely full, assuming no signal
    > or error occurs?


    No.

    > Basically, I don't want frequent context switches
    > between kernel and user space. What I want is to accumulate a
    > certain number of IP multicast packets before the function returns.
    > I am working on Linux 2.6.


    Don't worry about it. You will only get frequent context switches if
    there's nothing else to do. If there's nothing else to do, why should
    you care if there are lots of context switches?

    DS

  3. Re: socket programming using MSG_WAITALL for receiving multicast packets

    On Mar 20, 4:11 pm, David Schwartz wrote:

    > Don't worry about it. You will only get frequent context switches if
    > there's nothing else to do. If there's nothing else to do, why should
    > you care if there are lots of context switches?


    I am receiving high-bit-rate streaming data, such as video. I
    think it will be more efficient with fewer context switches between
    kernel and user space. So if the kernel can accumulate more packets
    before waking up the function, it should be more CPU efficient, right?

    John

  4. Re: socket programming using MSG_WAITALL for receiving multicast packets

    In article
    <25d3af5a-ffeb-422c-8e69-e44bf97926df@t54g2000hsg.googlegroups.com>,
    jiangxu168@gmail.com wrote:

    > I am receiving high-bit-rate streaming data, such as video. I
    > think it will be more efficient with fewer context switches between
    > kernel and user space. So if the kernel can accumulate more packets
    > before waking up the function, it should be more CPU efficient, right?


    Regardless, there's nothing you can do about this. With datagram
    sockets, there's always a one-to-one correspondence between datagrams
    and calls to recv. How else do you expect to tell where the datagram
    boundaries are?

    The MSG_WAITALL option is only meaningful for stream sockets, since
    there are no datagram boundaries to worry about.

    --
    Barry Margolin, barmar@alum.mit.edu
    Arlington, MA
    *** PLEASE don't copy me on replies, I'll read them in the group ***

  5. Re: socket programming using MSG_WAITALL for receiving multicast packets

    On Mar 20, 4:23 pm, jiangxu...@gmail.com wrote:

    > I am receiving high-bit-rate streaming data, such as video. I
    > think it will be more efficient with fewer context switches between
    > kernel and user space.


    Don't worry, the scheduler agrees with you. When efficiency matters,
    there will be less context switching.

    > So if the kernel can accumulate more packets
    > before waking up the function, it should be more CPU efficient, right?


    Right, and that's exactly what the kernel will do, all by itself.

    If you test with no load, you will get lots of context switches. This
    is fine -- you can't save CPU in a jar to use later. As soon as the
    load goes up, the system will spend time doing other things and by the
    time your process gets to run, more packets will be accumulated as if
    by magic.

    Don't try to "fix" the scheduler. It's not broken.

    DS

  6. Re: socket programming using MSG_WAITALL for receiving multicast packets

    David Schwartz wrote:

    > Don't worry, the scheduler agrees with you. When efficiency matters,
    > there will be less context switching.
    >
    > Don't try to "fix" the scheduler. It's not broken.


    What does the scheduler have to do with this? He's not talking about
    process switches, he's talking about context switches between user and
    kernel mode when calling recvfrom().

    There is overhead when going into and out of the kernel, but in this
    case there's nothing you can do about it. There's no system call for
    getting the datagrams that have been buffered all at once.


  7. Re: socket programming using MSG_WAITALL for receiving multicast packets

    David Schwartz wrote:
    > Don't worry, the scheduler agrees with you. When efficiency matters,
    > there will be less context switching.


    So, it is ok to post single-byte reads to a stream socket?-)

    rick jones
    --
    denial, anger, bargaining, depression, acceptance, rebirth...
    where do you want to be today?
    these opinions are mine, all mine; HP might not want them anyway...
    feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

  8. Re: socket programming using MSG_WAITALL for receiving multicast packets

    On Mar 20, 4:44 pm, Barry Margolin wrote:

    > What does the scheduler have to do with this?


    The scheduler determines when the process runs after it is made ready-
    to-run by reception of the first datagram.

    > He's not talking about
    > process switches, he's talking about context switches between user and
    > kernel mode when calling recvfrom().


    When you make a call into the kernel, the largest component of the
    time it takes will be time spent running other processes (if any are
    chosen to run).

    > There is overhead when going into and out of the kernel, but in this
    > case there's nothing you can do about it. There's no system call for
    > getting the datagrams that have been buffered all at once.


    You are right, but there are two separate issues. One is how long will
    the receive function take if another process is scheduled, another is
    how long it will take if not.

    It's hard to know exactly what the OP is thinking here. But a modern
    CPU can easily do millions of system calls per second. So unless he's
    processing a *lot* of packets, the overhead due to switching into
    kernel space should be tolerable. I just benchmarked my dual P3-1GHz
    system -- it can do 7,000,000 system calls per second. System call
    latency is much less than a microsecond.

    I was thinking his main issue was that his program would run every
    time a packet was received, thus increasing process churn on his
    system resulting in poor performance both for him and everyone else.
    If his sole concern was number of system calls, yeah, there's nothing
    he can do about that.

    DS

  9. Re: socket programming using MSG_WAITALL for receiving multicast packets

    On Mar 20, 4:48 pm, Rick Jones wrote:

    > So, it is ok to post single-byte reads to a stream socket?-)


    That's several orders of magnitude worse than what he's doing.
    Consider a typical packet of data received: count how many system
    calls he needs for it now, versus how many single-byte reads would
    take.

    There is, also, one other huge difference. Posting larger reads
    doesn't change when your process is made ready-to-run. Whether you try
    to read one byte or one thousand, you will become ready-to-run at the
    same time. He is looking for a way to change when his process becomes
    ready-to-run -- he's looking to make it later. That wastes CPU time he
    could be using to process the packet(s) he has already received.

    DS

  10. Re: socket programming using MSG_WAITALL for receiving multicast packets

    In article
    <37559f82-5ef4-4fa8-bfea-39dd06dc5d39@f63g2000hsf.googlegroups.com>,
    David Schwartz wrote:

    > There is, also, one other huge difference. Posting larger reads
    > doesn't change when your process is made ready-to-run. Whether you try
    > to read one byte or one thousand, you will become ready-to-run at the
    > same time. He is looking for a way to change when his process becomes
    > ready-to-run -- he's looking to make it later. That wastes CPU time he
    > could be using to process the packet(s) he has already received.


    It sounded like some kind of audio processing application. There's no
    point in processing the packets faster than they can be played, you're
    just going to get blocked on the output side. If you don't need to
    process packets as soon as they arrive, some savings could be achieved
    if the recv's could be batched up.

    However, I think the OP's worries are overblown. While there have been
    operating systems that had considerable overhead to switch between user
    and kernel modes (comparable to switching between processes), I think
    most modern Unixes use pretty efficient mechanisms. Rather than having
    completely separate user and kernel address maps, they use the same
    address map and just flip some permission bits (so that when you're in
    user mode you don't have access to the kernel portion of the address
    space).

