I am working on a STREAMS driver on AIX 5.3 and am observing an allocation
failure with netstat -m. (I panic the system when I get a failure from
allocb(); the failed request is for size 8192, marked under CPU 4 in the
output below, and a simplified sketch of the failing path follows the
output.)
(0)> netstat -m

Kernel malloc statistics:

******* CPU 0 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     32      12       12       0        0    116   4996      0
     64     192    12419       0        4     64   4996      0
    128    1178    74591       0        4      6   2498      0
    256    1997   306733       0        2      3   4996      0
    512     192   145617       0      256   1880   6245      0
   1024     180     4070       0       43     16   2498      0
   2048      92    19695       0     1026   1960   3747      0
   4096       5      975       0        1      0   1249      0
   8192       1     1322       0        1      6    624      0
  16384       1      577       0       64    259    312    200
 131072       0        0       0        0    122    195      0


******* CPU 1 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     64      16     3004       0        1     48   4996      0
    128     873    25281       0        0     23   2498      0
    256    1746    65698       0        0     14   4996      0
    512      24    15374       0        0     24   6245      0
   1024      16      283       0        6     24   2498      0
   2048      16     1134       0       14     50   3747      0
   4096       2       60       0        0      0   1249      0
   8192       1      919       0        1      2    624      0
  16384       1       34       0        0      6    312      0
 131072       0        0       0        0      8     16      0


******* CPU 2 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     32      53       72       0        0     75   4996      0
     64     155    16189       0        2     37   4996      0
    128     224    74684       0        5     64   2498      0
    256     250   309359       0       10    342   4996      0
    512    6417   164995       0     1040   1935   6245      0
   1024     183     3929       0       49     21   2498      0
   2048    6237    24265       0     4098   1961   3747      0
   4096      73     1069       0       15      5   1249      0
   8192       6     1110       0        2     47    624      0
  16384    1537     2067       0      253    270    312    176
  65536       1        1       0        1      0    156      0
 131072       2        2       0        0    119    195      0


******* CPU 3 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     64      11     1576       0        0     53   4996      0
    128       0    23715       0        0     64   2498      0
    256       0    56858       0        0    304   4996      0
    512      18    10162       0        2     30   6245      0
   1024      10      215       0        5     22   2498      0
   2048      11      666       0       12     25   3747      0
   4096       0       40       0        0      2   1249      0
   8192       1      655       0        0     41    624      0
  16384       0       17       0        1      5    312      0
 131072       0        0       0        0      8     16      0


******* CPU 4 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     32      33       60       0        0     95   4996      0
     64     112     7848       0        3     80   4996      0
    128      61    75812       0        0     67   2498      0
    256     205   266290       0        0    387   4996      0
    512     225   143350       0        3     55   6245      0
   1024      98     2680       0       32     38   2498      0
   2048      90     8410       0       22     42   3747      0
   4096       2     1218       0        0      0   1249      0
   8192       3     3310       1        2     64    624      0  <----- Failed here
  16384      25      108       0        4      0    312      0
  32768       0        3       0        1      0    156      0
 131072       0        0       0        0     11     22      0


******* CPU 5 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     64       7      136       0        0     57   4996      0
    128       0    24138       0        0     64   2498      0
    256       0    62724       0        0    288   4996      0
    512      13     9793       0        0     27   6245      0
   1024       6      272       0        4     14   2498      0
   2048       8      768       0        0     40   3747      0
   4096       0       24       0        0      1   1249      0
   8192       0     1366       0        0     36    624      0
  16384       0       17       0        0      8    312      0
 131072       0        0       0        0      8     16      0


******* CPU 6 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     32      74       77       0        0     54   4996      0
     64      78    11969       0        3    114   4996      0
    128     720    85666       0        7     16   2498      0
    256    1020   301508       0        4      4   4996      0
    512    2504   173055       0      257     16   6245      0
   1024      61     2801       0       31     67   2498      0
   2048    2089    10064       0     1045     37   3747      0
   4096       4     1166       0        0      1   1249      0
   8192       3     3393       0        1     40    624      0
  16384     490      564       0       62      0    312      0
  32768       0        1       0        1      0    156      0
 131072       0        0       0        0     97    195      0


******* CPU 7 *******
By size   inuse    calls  failed  delayed   free  hiwat  freed
     64       9      133       0        0     55   4996      0
    128       0    21699       0        0     64   2498      0
    256       1    51321       0        0    319   4996      0
    512      11     7700       0        0     29   6245      0
   1024       8      178       0        6     16   2498      0
   2048       8      678       0        0     46   3747      0
   4096       0       16       0        1      3   1249      0
   8192       0      482       0        0     42    624      0
  16384       0        9       0        0      3    312      0
 131072       0        0       0        0      8     16      0

Streams mblk statistic failures:
0 high priority mblk failures
1 medium priority mblk failures
0 low priority mblk failures
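
For context, the failing path in my driver looks roughly like this (a
simplified, untested sketch; mydrv_send() and the data copy are
placeholders for the real code):

#include <sys/types.h>
#include <sys/stream.h>

/* Sketch of the failing path; len is 8192 in the failing case. */
static void
mydrv_send(queue_t *q, unsigned char *data, size_t len)
{
        mblk_t *mp;

        /* BPRI_MED is consistent with the "1 medium priority mblk
         * failures" counter above. */
        mp = allocb(len, BPRI_MED);
        if (mp == NULL)
                panic("mydrv: allocb(8192) failed");  /* current behaviour */

        bcopy((char *)data, (char *)mp->b_wptr, len);
        mp->b_wptr += len;
        putnext(q, mp);
}

I understand the conventional STREAMS recovery would be to schedule a
retry with bufcall() rather than panicking, but for now the panic is
deliberate so I can inspect the state from kdb (hence the (0)> prompt
above).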

I am trying to understand the output here:
1. Is it okay for an mblk allocation to fail even when there are free
buffers for that size, and for greater sizes?
2. How do I identify the mblk requests made by my driver only? (One
approach I am considering is sketched below.)
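
Regarding question 2: netstat -m only shows system-wide per-CPU totals,
so one approach I am considering is to funnel all of the driver's
allocations through a small wrapper that keeps driver-local counters.
This is an untested sketch; mydrv_allocb() and the counter names are
hypothetical, and fetch_and_add() is AIX's atomic primitive from
<sys/atomic_op.h>:

#include <sys/types.h>
#include <sys/stream.h>
#include <sys/atomic_op.h>

/* Driver-local accounting: netstat -m cannot attribute requests to a
 * particular caller, so the driver counts its own. */
static int mydrv_alloc_calls;
static int mydrv_alloc_failed;

static mblk_t *
mydrv_allocb(int size, int pri)
{
        mblk_t *mp;

        fetch_and_add((atomic_p)&mydrv_alloc_calls, 1);
        mp = allocb(size, pri);
        if (mp == NULL)
                fetch_and_add((atomic_p)&mydrv_alloc_failed, 1);
        return mp;
}

The two counters could then be displayed from kdb alongside the
netstat -m numbers.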

Thanks