Dear Netware Experts,
I have three NetWare 5.1 servers. Two are in a production tree. The
third is a new one in a test tree by itself. The two old ones are on
100 Mbps connections. The new one is on gig-e.
I was testing a new Dell GX270 workstation with Windows XP and a gig-e
connection, using an old Novell program called perform3. I noticed I
was getting results like 30,000 to the new server but only 3000 to the
two old ones.
I also noticed that while perform3 ran against the two old servers, the
XP networking monitor showed 6 or 7 percent utilization. But when it
ran against the new server, the utilization stayed close to zero.
Perform3 gives normal results of about 10,000 to each server from a
Win98 box on a 100 Mbps connection. I'm having problems only with XP
and the two old servers.
Using the free packet capture program Ethereal, I found the problem
was with burst mode transfers.
The new server starts with a low burst length that gets bigger with
each burst, which I think is normal.
The two old servers both start with a burst length around 35,000 which
stays the same for each burst. This makes me think the problem is
related to the negotiation of burst parameters during login.
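A rough back-of-the-envelope model seems to fit the numbers, too. This is not the actual NCP burst algorithm, just a sketch: if the client sends one burst at wire speed and then stalls for one round trip before the next burst, a burst length stuck at ~35,000 bytes caps throughput regardless of link speed. The 10 ms inter-burst gap below is an assumed figure, not something I measured:

```python
# Illustrative model only -- NOT the real NCP packet-burst algorithm.
# Assumes one fixed stall ("gap") between bursts; gap value is a guess.

def throughput_kb_s(burst_bytes: int, line_rate_bps: float, gap_s: float) -> float:
    """Average throughput when each burst is followed by one idle gap."""
    send_time = burst_bytes * 8 / line_rate_bps
    return burst_bytes / (send_time + gap_s) / 1000  # approx KB/s

GIG_E = 1_000_000_000  # gig-e line rate in bits/s
GAP = 0.010            # assumed 10 ms stall between bursts

fixed = throughput_kb_s(35_000, GIG_E, GAP)      # burst length stuck at 35 KB
grown = throughput_kb_s(1_000_000, GIG_E, GAP)   # after the window has grown

print(f"fixed 35 KB bursts: ~{fixed:.0f} KB/s")
print(f"grown 1 MB bursts:  ~{grown:.0f} KB/s")
```

With those assumptions the fixed 35,000-byte burst lands in the same ballpark as the ~3000 I'm seeing against the old servers, while a window that keeps growing gets an order of magnitude more, which is roughly the shape of the 30,000 result against the new server.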
Thanks in advance for any suggestions.