Hello folks,

I hope this is the right place for the following problem.

I'm writing a device driver that handles special timing hardware. A
daemon watches the device file and logs all events to a file and to a
named pipe. Finally, an X application (using Motif) should display the
incoming timing signals from the named pipe in its GUI. For that, I use
the XtAppAddInput function in the following manner:

void TimingEventCallBack(XtPointer UserData, int *Source,
                         XtInputId *Input)
{
    /* read out the timing signal data [...] */
}

int OpenTimingPipe(void)
{
    /* RoTimingPipe is the descriptor of the already opened FIFO [...] */
    if (RoTimingPipe == -1)
        return E_RO_PIPE_OPEN;

    RoTimingPipeStream = fdopen(RoTimingPipe, "r");
    if (!RoTimingPipeStream)
        return E_RO_PIPE_OPEN;

    return E_RO_NOERROR;
}

int InitTimingConnection(void)
{
    if (OpenTimingPipe() == E_RO_NOERROR) {
        RoPipeInput = XtAppAddInput(Application, RoTimingPipe,
                                    (XtPointer) XtInputReadMask,
                                    TimingEventCallBack, NULL);
        return 1;
    }
    return 0;
}

The problem is that the callback function (TimingEventCallBack) is
called continuously as soon as some data has arrived on the pipe, even
after the pipe has been emptied again. Because of this, the process
claims approx. 100% of the CPU. Does anyone have an idea what the
reason for this unpleasant effect is -- and maybe a solution?

For testing, I created the pipe on the command line with "mkfifo" and
wrote something into it with "cat exmpdata > mypipe" or "echo".
Somewhere I read about a similar effect when disk files are used as the
input source for XtAppAddInput. Is this the answer? Or are there any
flags which might help?



P.S. I'm using SuSE Linux 9.3 with kernel
fup2: comp.unix.programmer