Bobohoolie wrote:
> Hi, again,
>
> At least try decreasing your (vmo) maxperm% and also maxclient%.
>
> Try to translate your specific situation (application behaviour!) to the
> ioo and vmo man pages. You can 'play' with, e.g., the max/min page
> read-aheads (the J2 options in ioo, in case you use JFS2), but it
> depends STRONGLY on what your system is doing and how: many small I/Os,
> or fewer but bigger ones; disks, users, processes, memory, etc. It all
> comes together.
>
> You'll understand. Try to use the sg246478 (AIX 5L ...) redbook!
>
>
> tomiskra@vodatel.net wrote:
> > Hi,
> >
> > top10 is, I think, useless, because there are about 350 users and none
> > of them gets more than one CPU for any length of time
> > (at most 12% of 1 CPU out of 8, for a few seconds).
> >
> > Regards
> >
> > # vmo -a
> > memory_frames = 4014080
> > pinnable_frames = 3798357
> > maxfree = 128
> > minfree = 120
> > minperm% = 20
> > minperm = 782080
> > maxperm% = 80
> > maxperm = 3128324
> > strict_maxperm = 0
> > maxpin% = 80
> > maxpin = 3211264
> > maxclient% = 80
> > lrubucket = 131072
> > defps = 1
> > nokilluid = 0
> > numpsblks = 2097152
> > npskill = 16384
> > npswarn = 65536
> > v_pinshm = 0
> > pta_balance_threshold = 50
> > pagecoloring = 0
> > framesets = 2
> > mempools = 1
> > lgpg_size = 0
> > lgpg_regions = 0
> > num_spec_dataseg = n/a
> > spec_dataseg_int = n/a
> > memory_affinity = 1
> > htabscale = n/a
> > force_relalias_lite = 0
> > relalias_percentage = 0
> > data_stagger_interval = 161
> > large_page_heap_size = n/a
> > kernel_heap_psize = n/a
> > soft_min_lgpgs_vmpool = 0
> > vmm_fork_policy = 0
> > low_ps_handling = 1
> > mbuf_heap_psize = n/a
> > strict_maxclient = 1
> > cpu_scale_memp = 8
> > lru_poll_interval = 0
> > lru_file_repage = 1
> >
> > # ioo -a
> > memory_frames = 4014080
> > minpgahead = 2
> > maxpgahead = 8
> > pd_npages = 65536
> > maxrandwrt = 0
> > numclust = 1
> > numfsbufs = 186
> > sync_release_ilock = 0
> > lvm_bufcnt = 9
> > j2_minPageReadAhead = 2
> > j2_maxPageReadAhead = 128
> > j2_nBufferPerPagerDevice = 512
> > j2_nPagesPerWriteBehindCluster = 32
> > j2_maxRandomWrite = 0
> > j2_nRandomCluster = 0
> > j2_non_fatal_crashes_system = 0
> > j2_syncModifiedMapped = 1
> > jfs_clread_enabled = 0
> > jfs_use_read_lock = 1
> > hd_pvs_opn = 6
> > hd_pbuf_cnt = 640
> > j2_inodeCacheSize = 400
> > j2_metadataCacheSize = 400
> > j2_dynamicBufferPreallocation = 16
> > j2_maxUsableMaxTransfer = 512
> > pgahd_scale_thresh = 0
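
If you do try the maxperm%/maxclient% reduction Bobohoolie suggests, the
syntax is roughly as below. The values are only illustrative starting
points, not recommendations for your workload (maxclient% has to stay at
or below maxperm%), and -p makes the change persist across reboots:

# vmo -p -o maxperm%=30 -o maxclient%=30
# ioo -p -o j2_maxPageReadAhead=256

The ioo line bumps the JFS2 sequential read-ahead he mentions; it only
helps if your applications actually do large sequential reads.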



Your slow telnet response could also be caused by a poor DNS
configuration on your DNS servers, in particular missing reverse
resolution. I've seen it many times: telnetd tries to reverse-resolve
the client that is connecting, and since the lookup doesn't stall every
time, the delay looks intermittent and strange. A simple way to test is
to move your /etc/resolv.conf file to a backup file name and see if you
still get the slow connect times on telnet.
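
For example (the backup name is arbitrary):

# mv /etc/resolv.conf /etc/resolv.conf.bak

then telnet in from a client and see whether the login prompt comes up
quickly. When you are done testing, put the file back:

# mv /etc/resolv.conf.bak /etc/resolv.conf

With resolv.conf out of the way the resolver stops asking the DNS
servers and just uses /etc/hosts, so if the delay disappears you know
the reverse lookups were the problem.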