Whack page faulting!? - BSD



  1. Whack page faulting!?

    I am seeing some interesting behavior while tuning a PHP class for
    use on a forward-facing OpenBSD 3.6 environment. Watching memory
    usage reveals some trippy anomalies, i.e. excessive page fault
    counts, though I see no page-in or page-out activity, nor any pages
    scanned by the clock algorithm.

    So, what is the VMM doing? Why would there be this many address
    misses that require handling by the memory manager, outside of
    calling swapped-out pages back into resident memory? It doesn't look
    like I am ever running low on free memory.
    Unless this is something to do with syscalls from kernel memory,
    heh...


    Q. Why am I seeing so much page fault activity despite no memory
    bottleneck and no page-in/page-out activity?
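
    One check I could bolt on here, just to split the kinds of fault:
    getrusage(2) reports soft faults ( ru_minflt, satisfied with no I/O )
    separately from hard faults ( ru_majflt, which hit the disk ), so a
    tiny harness around the suspect work would confirm whether any of
    these faults ever touch disk. A rough sketch, with the real work
    left as a placeholder:

    ---cut---
    /*
     * Rough sketch: report this process's fault counts, split into
     * soft faults ( no I/O ) and hard faults ( disk I/O ).
     */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/resource.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct rusage ru;

            /* ... run the suspect work here ... */

            if (getrusage(RUSAGE_SELF, &ru) == -1) {
                    perror("getrusage");
                    return 1;
            }
            printf("soft faults (no I/O): %ld\n", ru.ru_minflt);
            printf("hard faults (I/O):    %ld\n", ru.ru_majflt);
            return 0;
    }
    ---cut---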


    OpenBSD 3.6-current (GENERIC) #233: Wed Dec 29 00:19:38 MST 2004
    cpu0: AMD Athlon(tm) ("AuthenticAMD" 686-class) 1.10 GHz
    real mem = 536387584 (523816K)
    avail mem = 482598912 (471288K)


    Note: you will notice where I fire off the process, five or six
    iterations down ( the vm usage goes up and I start seeing lots of
    page faults [ in the thousands! ] )...
    ---cut---

    # vmstat 2 200
    procs   memory        page                        disks    traps        cpu
    r b w   avm    fre    flt re pi po fr sr wd0 wd1  int  sys  cs  us sy  id
    0 0 0  59640 396540     6  0  0  0  0  0   4   0  240  117  31   0  0 100
    1 0 0  59640 396540    12  0  0  0  0  0   0   0  234   71   8   0  0 100
    0 0 0  59640 396540     4  0  0  0  0  0   0   0  231   25   8   0  0 100
    0 0 0  59640 396540     4  0  0  0  0  0   0   0  231   23   7   0  0 100
    0 0 0  59640 396540     6  0  0  0  0  0   0   0  231   30   8   0  0 100
    0 0 0  59640 396540     4  0  0  0  0  0   0   0  231   27   8   0  0  99
    0 1 0  62824 394348   798  0  0  0  0  0 221   0  347 1997 124   4  0  96
    0 2 0  63128 393980   208  0  0  0  0  0 283   0  383  253 156   0  1  99
    1 0 0  73688 383292  3377  0  0  0  0  0 129   0  330 1271  82  73  5   2
    1 0 0  75800 381096  3856  0  0  0  0  0 145   0  360 1157  90  72  7  21
    1 0 0  68444 388404  3639  0  0  0  0  0  56   0  330  928  47  70  9  21
    1 0 0  69032 387864  4309  0  0  0  0  0 176   0  396 1460 107  68  9  22
    1 0 0  69548 387336  3010  0  0  0  0  0  72   0  312  787  54  73  5  22
    1 0 0  80544 376332  3588  0  0  0  0  0  94   0  311  780  66  81  4  14
    1 0 0  75536 381364  4351  0  0  0  0  0 119   0  370 1393  76  69  9  21

    ...any input would be greatly appreciated!


    SLR-


  2. Re: Whack page faulting!?

    A page fault doesn't just mean "read from swap"; it is also triggered
    by any page that is newly allocated. The kernel doesn't actually give
    you memory up front, it just sets things up so that the first access
    generates a page fault, and only then is the page physically
    allocated.
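
    A quick way to see this in action, if you want to convince yourself
    ( an illustrative sketch; the exact fault delta can vary a little
    with the allocator and page size ): map an arbitrary 1024 anonymous
    pages, touch each one once, and watch getrusage(2)'s soft fault
    counter climb by roughly one per page, with no I/O at all.

    ---cut---
    /*
     * Demand paging in action: mmap() hands back address space, not
     * physical pages. The first write to each page traps into the
     * kernel, which allocates the page then and there, so expect the
     * soft fault delta to be close to the page count.
     */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/time.h>
    #include <sys/resource.h>
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct rusage before, after;
            long pgsz = sysconf(_SC_PAGESIZE);
            size_t len = 1024 * (size_t)pgsz;       /* 1024 pages */
            size_t i;
            char *p;

            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED)
                    err(1, "mmap");

            getrusage(RUSAGE_SELF, &before);
            for (i = 0; i < len; i += (size_t)pgsz)
                    p[i] = 1;                       /* first touch */
            getrusage(RUSAGE_SELF, &after);

            printf("touched %ld pages, took %ld soft faults\n",
                (long)(len / (size_t)pgsz),
                after.ru_minflt - before.ru_minflt);
            return 0;
    }
    ---cut---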


  3. Re: Whack page faulting!?

    True; and I understand that. However, this is simply compressing an
    object on a system with no significant process activity. Unless the
    process is chunking data in a way that requires more mmapping than
    is reasonable, the fault counts presented seem excessive.
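
    That chunking theory is at least testable, though. If the allocator
    hands each chunk's scratch buffer back to the kernel when it is
    freed ( mmap-based allocators can do exactly that ), then every page
    gets first-touched again on the next pass, and a compress loop alone
    could rack up thousands of soft faults per poll with zero paging
    I/O. A purely illustrative sketch of that pattern, with made-up
    chunk counts and buffer sizes:

    ---cut---
    /*
     * Illustrative only: an allocate/touch/free cycle per chunk.
     * Each pass maps and unmaps its scratch buffer, so every page is
     * first-touched again every time; that is N_CHUNKS * 16 soft
     * faults here ( with 4K pages ), and no paging I/O at all.
     */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <err.h>
    #include <string.h>

    #define N_CHUNKS 500
    #define BUF_LEN  (64 * 1024)    /* 64K scratch per chunk */

    int
    main(void)
    {
            int i;

            for (i = 0; i < N_CHUNKS; i++) {
                    char *buf = mmap(NULL, BUF_LEN,
                        PROT_READ | PROT_WRITE,
                        MAP_ANON | MAP_PRIVATE, -1, 0);
                    if (buf == MAP_FAILED)
                            err(1, "mmap");
                    /* stand-in for the real compress step */
                    memset(buf, 0xff, BUF_LEN);
                    if (munmap(buf, BUF_LEN) == -1)
                            err(1, "munmap");
            }
            return 0;
    }
    ---cut---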

    The best I can guess is that there are too many getdirentries()
    calls initiated by the app, but looking at a ktrace it doesn't look
    that excessive: just the basic NAMI lookup for the file and then
    quick address returns. Maybe the question is more along the lines
    of:

    What, assuming I can't see excessive calls ( kernel or otherwise ),
    would result in such high fault counts ( up to ~7000 per poll ),
    especially if the returns are singular?
    Despite tracing the children of this proc, am I missing some
    kernel-mode calls that are outside direct association but somehow
    relatively effective? If I try to trace the entire system I will be
    lost for days in dump files, *sigh*. I haven't tracked LWPs, so I
    don't really know whether some other low-level snooping would show a
    sign of the culprit...

    As an aside, this system isn't dying under thrashing/load; I am
    simply curious about what is responsible for this anomaly
    ( 'anomaly' as far as my personal experience is concerned, that
    is... ).



    SLR-

