Question: writing END Drivers & using M_BLK's
Hello vxWorks Developers,
I am implementing my first network END driver for vxWorks 6. I followed
the vxWorks Device Driver Developer's Guide and studied the other
drivers delivered with vxWorks. The driver is for the opencores.org
Fast Ethernet 100 Mbit MAC.
My driver is implemented and loaded by the MUX so far, and I can send
pings, flood pings, and so on. I test from a Linux host and use Ethereal.
But sometimes, especially when I try to log in to my board via telnet
(it's an 85xx PPC CPU), the stack seems to hang.
I assume this is likely caused by incorrect usage of netTupleGet() and
the other netPool functions. I set up my Tx and Rx BD rings and assign a
fixed net tuple to each BD. The other drivers I looked at mostly do the same.
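For context, here is roughly what I do at Rx ring initialization. This is only a sketch of my understanding of the documented netBufLib API (netTupleGet() returning an mBlk-clBlk-cluster triple); the driver structure fields (pDrvCtrl, rxMblk, rxBd) are made-up names for illustration, and this obviously only compiles against the vxWorks kernel headers:

```c
/* Sketch: pre-allocate one net tuple per Rx buffer descriptor.
 * Assumes <netBufLib.h> and <end.h> are included and pDrvCtrl is
 * a hypothetical driver control structure. */

#define RX_RING_SIZE  64
#define CL_BUF_SIZE   1536   /* big enough for a max Ethernet frame */

STATUS rxRingInit (DRV_CTRL *pDrvCtrl)
    {
    int i;

    for (i = 0; i < RX_RING_SIZE; i++)
        {
        /* get an mBlk/clBlk/cluster tuple from the driver's pool */
        M_BLK_ID pMblk = netTupleGet (pDrvCtrl->endObj.pNetPool,
                                      CL_BUF_SIZE, M_DONTWAIT,
                                      MT_DATA, FALSE);
        if (pMblk == NULL)
            return ERROR;                 /* pool exhausted */

        pDrvCtrl->rxMblk[i] = pMblk;      /* remember the tuple */

        /* hand the cluster's data pointer to the hardware BD */
        pDrvCtrl->rxBd[i].bufAddr = (UINT32) pMblk->mBlkHdr.mData;
        }

    return OK;
    }
```

Is this the right pattern, or should the tuple only be fetched at receive time?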
However, I find the concept of the memory pools, M_BLKs, and clusters a
bit... confusing :) I wish I could simply pass a pointer to the received
Ethernet frame to the network stack, and get back a pointer to a
ready-made Ethernet frame from the stack...
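The closest I have found to that "just pass the pointer" idea is the receive path the driver guide describes: refill the BD with a fresh cluster, wrap the filled cluster in an mBlk/clBlk pair, and hand the whole tuple to the MUX. Again, this is only my sketch of that sequence using the documented netBufLib/END calls, with made-up driver field names, and it only compiles on a vxWorks target:

```c
/* Sketch: hand a received frame (len bytes in pRecvCluster) up the stack.
 * Assumes <netBufLib.h> and <end.h>; pDrvCtrl and its fields are
 * hypothetical names. */

void rxFrameHandOff (DRV_CTRL *pDrvCtrl, char *pRecvCluster, int len)
    {
    NET_POOL_ID pool = pDrvCtrl->endObj.pNetPool;
    CL_BLK_ID   pClBlk;
    M_BLK_ID    pMblk;

    /* wrap the filled cluster: cluster -> clBlk -> mBlk */
    pClBlk = netClBlkGet (pool, M_DONTWAIT);
    if (pClBlk == NULL)
        return;                           /* drop the frame */

    pMblk = netMblkGet (pool, M_DONTWAIT, MT_DATA);
    if (pMblk == NULL)
        {
        netClBlkFree (pool, pClBlk);
        return;                           /* drop the frame */
        }

    netClBlkJoin (pClBlk, pRecvCluster, len, NULL, 0, 0, 0);
    netMblkClJoin (pMblk, pClBlk);

    pMblk->mBlkHdr.mLen    = len;
    pMblk->mBlkHdr.mFlags |= M_PKTHDR;    /* first mBlk of the packet */
    pMblk->mBlkPktHdr.len  = len;

    /* pass the tuple to the MUX; the stack frees it when done */
    END_RCV_RTN_CALL (&pDrvCtrl->endObj, pMblk);
    }
```

If I understand correctly, ownership of the tuple passes to the stack here, so the driver must never touch that cluster again and must refill the BD with a new one. Please correct me if this is wrong.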
I have already tried to find more information, practical hints, and
"do's and don'ts" about END drivers here and at WindSurf.
Does anybody with experience implementing reliable END drivers know of
other sources of information, or can you point out the pitfalls that may
occur with END drivers?
I say "reliable" because even the drivers delivered by Wind River seem
to hang sometimes, as I have read here.
Exception 0x00e in reality.dll - universe halted