
Thread: this be aioCancel. Danger ahead!

  1. this be aioCancel. Danger ahead!

    In an effort to improve performance, I've enabled async writes by changing
    "#define ASYNC_WRITE 1" in src/fs/aufs/store_asyncufs.h.
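    For reference, the change described above amounts to flipping one
    preprocessor flag; the sketch below shows only that line, not the
    surrounding file contents:

```c
/* src/fs/aufs/store_asyncufs.h (Squid 2.5) -- sketch of the edit described above */
#define ASYNC_WRITE 1   /* previously 0: submit cache writes through the async I/O threads */
```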

    When I bring up the async-write-enabled Squid, cache.log starts
    filling rapidly with:

    2004/10/29 19:49:15| this be aioCancel. Danger ahead!

    at a rate seemingly equal to request rate.

    After a period of minutes, another error appears:

    2004/10/29 19:49:29| WARNING! Your cache is running out of
    filedescriptors

    followed shortly by squid refusing to service requests. My max FD is
    set to 16384.
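
    As a quick sanity check on that limit, the soft file-descriptor limit
    of the shell that launches Squid (which Squid 2.5 picks up at
    configure/run time) can be inspected with ulimit; a minimal sketch:

```shell
# Show the current soft limit on open file descriptors for this shell.
ulimit -n
# To raise it before starting Squid (needs headroom in the hard limit, or root):
# ulimit -n 16384
```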

    The number of open files at this point, according to 'lsof', was
    greater than 6 million. Under non-async-write-enabled operation,
    typically fewer than 50k files are listed.

    During this time (our peak time), we experienced 200 requests/sec
    sustained for more than 60 minutes.

    I have two STABLE7 installations in separate directories, with the
    only other difference being that one has async writes enabled, the
    other disabled.

    Below is an excerpt of cache.log from startup:

    2004/11/02 09:47:24| Starting Squid Cache version 2.5.STABLE7-20041102
    for i686-pc-linux-gnu...
    2004/11/02 09:47:24| Process ID 8685
    2004/11/02 09:47:24| With 16384 file descriptors available
    2004/11/02 09:47:24| Performing DNS Tests...
    2004/11/02 09:47:24| Successful DNS name lookup tests...
    2004/11/02 09:47:24| DNS Socket created at 0.0.0.0, port 33470, FD 4
    2004/11/02 09:47:24| Adding nameserver xxx.xxx.xxx.xxx from
    /etc/resolv.conf
    2004/11/02 09:47:24| Adding nameserver xxx.xxx.xxx.xxx from
    /etc/resolv.conf
    2004/11/02 09:47:24| Adding nameserver xxx.xxx.xxx.xxx from
    /etc/resolv.conf
    2004/11/02 09:47:24| Unlinkd pipe opened on FD 9
    2004/11/02 09:47:24| Swap maxSize 8601600 KB, estimated 661661 objects
    2004/11/02 09:47:24| Target number of buckets: 33083
    2004/11/02 09:47:24| Using 65536 Store buckets
    2004/11/02 09:47:24| Max Mem size: 8192 KB
    2004/11/02 09:47:24| Max Swap size: 8601600 KB
    2004/11/02 09:47:24| Store logging disabled
    2004/11/02 09:47:24| Rebuilding storage in /mnt/squidcache1/cache_dir
    (DIRTY)
    2004/11/02 09:47:24| Rebuilding storage in /mnt/squidcache2/cache_dir
    (DIRTY)
    2004/11/02 09:47:24| Rebuilding storage in /mnt/squidcache3/cache_dir
    (DIRTY)
    2004/11/02 09:47:24| Using Least Load store dir selection
    2004/11/02 09:47:24| Current Directory is
    /usr/local/squid-STABLE7-20041102
    2004/11/02 09:47:24| Loaded Icons.
    2004/11/02 09:47:24| Accepting HTTP connections at 0.0.0.0, port 80,
    FD 14.
    2004/11/02 09:47:24| Accepting ICP messages at 0.0.0.0, port 3130, FD
    15.
    2004/11/02 09:47:24| WCCP Disabled.
    2004/11/02 09:47:24| Ready to serve requests.
    2004/11/02 09:47:25| this be aioCancel. Danger ahead!
    2004/11/02 09:47:25| this be aioCancel. Danger ahead!
    2004/11/02 09:47:25| this be aioCancel. Danger ahead!

    The configure script is simply:
    ../configure --enable-async-io

    Other details:
    Red Hat Linux 7.3
    2.4.18-3
    Dell 1550
    P3-933
    2.5GB RAM
    2 18GB 15k drives
    1 18GB 10k drive

    Thanks,

    Simon

  2. Re: this be aioCancel. Danger ahead!

    sdavison@hotmail.com (Simon Davison) wrote (in part) in message
    news:...
    >
    > ...
    >
    > The number of open files at this point, according to 'lsof', was
    > greater than 6million. Under non-async write enable operation,
    > typically less than 50k files are listed.


    Unless you filter lsof output very carefully, you can't use it
    to count file descriptor usage. Many open files share file
    descriptors, so it is necessary to ask lsof to report file
    descriptor addresses and count the unique ones. See the answer
    to this question in the lsof FAQ for more information: "Why is
    `lsof | wc` bigger than my system's open file limit?"
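
    That counting approach can be sketched as a pipeline. The sample
    input below is fabricated for illustration (real `lsof +ff` output
    has more columns, and availability of the `+f` file-structure
    options varies by lsof version); the point is to count unique
    FILE-ADDR values rather than raw lines:

```shell
# Simulated `lsof +ff` output: two descriptors in PID 8685 share one
# kernel file structure (same FILE-ADDR), so only two structures exist.
printf '%s\n' \
  'COMMAND PID  USER  FD  FILE-ADDR  TYPE NAME' \
  'squid   8685 squid 14u 0xc1a2b300 IPv4 *:80' \
  'squid   8685 squid 15u 0xc1a2b300 IPv4 *:3130' \
  'squid   8686 squid 14u 0xc1a2b400 IPv4 *:80' |
awk 'NR == 1 { for (i = 1; i <= NF; i++) if ($i == "FILE-ADDR") col = i; next }
     { print $col }' |
sort -u | wc -l    # on a live system, replace the printf with: lsof +ff
```

    which reports 2 distinct file structures for the sample data, even
    though three descriptor lines appear.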

    Vic Abell, lsof author
