Hi,

This is a simple BIND performance test case.

I'm using a 6-CPU HP Itanium HP-UX server as the BIND server for this
performance test, but the load only ramps up to 4 CPUs, so the
throughput is not as good as I would expect.

I'm running BIND 9.2.0 on HP-UX 11.23 and using queryperf, the
performance test tool from ISC, as the load generator. The queryperf
input file has around 60,000 lines, and the zone data served by named
only has 10 records (so this is a simple test).
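
For reference, the input file is just one query per line in
queryperf's "name type" format, and the zone holds a handful of A
records. The names and addresses below are placeholders, not my actual
test data (SOA/NS records omitted for brevity):

    host1.test.example A
    host2.test.example A
    ...

    ; test.example.zone
    host1    IN A    10.0.0.11
    host2    IN A    10.0.0.12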

named.conf, nsswitch.conf, and resolv.conf are also very simple; I'm
sure they point to the right DNS server and that names are resolved
via DNS first.
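
Roughly what they look like (addresses and zone names are placeholders,
and the nsswitch.conf syntax is from memory, so please double-check it
against your HP-UX release):

    # named.conf (server side)
    options {
        directory "/var/named";
        recursion no;
    };
    zone "test.example" {
        type master;
        file "test.example.zone";
    };

    # resolv.conf (client side)
    nameserver 10.0.0.1

    # nsswitch.conf (client side), DNS first
    hosts: dns [NOTFOUND=continue] files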

It is interesting that each queryperf client (with 50 or more
concurrent queries in flight) can only drive about 1 MB/s of network
throughput; when I add more clients, I get better overall performance.
The clients are separate servers, each with a dedicated LAN connection
to the BIND server (the invocation is sketched after the table). The
test results are below:

    num. of clients    queries per second
          1                  7000
          3                 15000
          5                 25000
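
Each client is started roughly like this (server address and file name
are placeholders; -q sets the number of outstanding queries):

    queryperf -d queries.txt -s 10.0.0.1 -q 50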

Monitoring with glance, named's wait events are mostly STREAMS and
LAN, and its system calls are mainly message send/receive, while the
system tables are not full at all. I've checked that the DNS service
uses UDP sockets for communication, and my guess is that the
bottleneck is in the network stack. So I tuned some socket
cache/connection parameters, but that did not improve things much.
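
The kind of tuning I mean is along these lines (I may be misremembering
the exact ndd objects and parameter names, so please check ndd -h on
your release before applying anything):

    # illustrative only; verify tunable names and sensible values first
    ndd -set /dev/sockets socket_udp_rcvbuf_default 262144
    ndd -set /dev/sockets socket_udp_sndbuf_default 262144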

So, does anyone have any suggestions?
Thx.
