[9fans] leak/umem question - Plan9




  1. [9fans] leak/umem question

    i have a process (modified upas/fs) that has quite a large amount
    of allocation churn. on startup, its memory footprint looks like this:
    quanstro 6305 0:00 0:00 6264K Pread 8.out
    umem reports this
    73023 4673712 henter+0x8e
    2435 702224 newmessage+0x16
    2434 405776 badd+0x24
    900 60000 strdup+0x20
    868 94176 parseunix+0x27
    4 384 newfid+0x5a
    3 91200 erealloc+0x25
    1 480 newmbox+0x15
    1 224 dirfstat+0x29
    1 32 mdirmbox+0x7d
    after churning through the whole mailbox, the image now looks like
    this
    quanstro 6305 0:17 0:10 32468K Pread 8.out
    leak says no blocks are leaked and umem gives an identical report to
    the first:
    73023 4673712 henter+0x8e
    2435 702224 newmessage+0x16
    2434 405776 badd+0x24
    900 60000 strdup+0x20
    868 94176 parseunix+0x27
    4 384 newfid+0x5a
    3 91200 erealloc+0x25
    1 480 newmbox+0x15
    1 224 dirfstat+0x29
    1 32 mdirmbox+0x7d
    if i use leak -b, the image generated is all light/dark blue with just a
    few white pixels at the beginning and end. there is no apparent free space.
    the image is at www.quanstro.net/leak.png.
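    (aside: field 2 of the umem report above is the byte count per
    allocation site; a quick awk sketch can total it — this is an
    assumed stand-in for doing the sum by hand, not a standard tool:)

    ```shell
    # sum field 2 of a umem report read from stdin; a sketch, assuming
    # field 2 is the byte count per allocation site
    awk '{t += $2} END {print t}'
    ```

    piping the report through this gives about 6mb of live allocations,
    consistent with the 6264k startup footprint but well short of the
    32mb image.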

    where is the "missing" 28mb going?

    - erik


  2. Re: [9fans] leak/umem question

    You didn't tell us much about your memory usage patterns.
    Do you allocate lots of large objects and then
    free them? That would explain the larger footprint
    and the identical umem. Do you agree with the second
    allocation profile?

    It is easily possible that aux/acidleak's bitmap code
    is not quite right, and that the 28MB is in fact free.

    Try running

    pid=12345
    echo 'leakdump({'$pid'})' | acid -lpool -lleak $pid |
    grep '^(block|free) ' >/tmp/a

    and then you can paw through /tmp/a to see what
    is reported for the last 28MB of address space.

    Russ



  3. Re: [9fans] leak/umem question

    > You didn't tell us much about your memory usage patterns.
    > Do you allocate lots of large objects and then
    > free them? That would explain the larger footprint
    > and the identical umem. Do you agree with the second
    > allocation profile?


    i should have included this information.

    the way i have things set up, the overhead is about 6mb
    and the additional cache goal is 512k. the cache is kept
    very small at this point, to stress it. although the
    goal is 1/2mb, whole messages need to be cached, and
    the largest message is 11mb.

    i found a bug late last night that prevented the cache
    from being as aggressively managed as i intended. the
    image now gets up to "only" 19mb.

    (on the other hand, before caching it took 150mb,
    a size roughly proportional to the mailbox size, not to the largest message.)

    > It is easily possible that aux/acidleak's bitmap code
    > is not quite right, and that the 28MB is in fact free.
    >
    > Try running
    >
    > pid=12345
    > echo 'leakdump({'$pid'})' | acid -lpool -lleak $pid |
    > grep '^(block|free) ' >/tmp/a
    >
    > and then you can paw through /tmp/a to see what
    > is reported for the last 28MB of address space.


    ; grep free /tmp/a | sumit
    13291456
    ; grep block /tmp/a | sumit
    6329424
    grand total is 19161k.
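    (sumit is a private helper, not a standard command; assuming it just
    totals one numeric column of its input — here the size field of the
    leakdump lines, whose position and decimal radix are guesses — a
    minimal awk stand-in might be:)

    ```shell
    # minimal stand-in for the private sumit script: total one numeric
    # column of stdin.  col=3 assumes the size is the third field of a
    # leakdump line; both the field number and the radix are guesses.
    awk -v col=3 '{t += $col} END {print t}'
    ```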

    the image total is

    quanstro 11483 0:18 0:01 19464K Pread 8.out

    so 19161k + 296k (executable size) = 19457k. this seems
    reasonable given the 11mb message.

    many thanks for the hint, russ.

    - erik


