The reason BIND ends up consuming all available memory, which then causes
DNS failures, is that by default the application places no limit on its
cache size. The cache therefore grows progressively until it exhausts
physical memory, then virtual memory, at which point the server starts
to fail.
*"BIND keeps growing, and growing, and growing. BIND has no limits on
cache size; it tries to cache every record until the record expires.
Under heavy load, BIND will chew up all your physical memory, start
thrashing, chew up all your virtual memory, and then commit hara-kiri,
if it doesn't dump core first."

"According to the BIND company, BIND 9 stays within a memory resource
limit without crashing. Unfortunately, when the cache fills up, BIND 9
discards /new/ cache entries. Performance drops dramatically. The server
begins failing under moderate loads."*

To fix this, we will cap the memory that BIND's cache may use at 80%
(1.6 GB) of the server's total RAM (2 GB) by setting the
*max-cache-size* parameter in the "options" block.

The parameter is described as follows:

*max-cache-size* defines the maximum amount of memory to use for the
server's cache, in bytes (case insensitive shortforms of k or m are
allowed). When the amount of data in the cache reaches this limit, the
server will cause records to expire prematurely so that the limit is not
exceeded. In a server with multiple views, the limit applies separately
to the cache of each view. The default is unlimited, meaning that
records are purged from the cache only when their TTLs expire. This
statement may be used in a view or in the global options block.
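The change described above, as a minimal named.conf sketch (the 1600M
figure assumes the 2 GB machine mentioned earlier; adjust it to your own
hardware):

```
options {
    // Cap the resolver cache at 80% of 2 GB of RAM.
    // Once the cache reaches this size, BIND expires records
    // prematurely instead of growing without bound.
    max-cache-size 1600M;
};
```

After editing named.conf, reload the configuration (for example with
`rndc reload`) so the limit takes effect.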

Etienne wrote:
> I'm experiencing what seems to be memory leaks with Bind 9.3.2 after it runs
> for 60-100 hours. It usually happens when I do a rndc reload.
> Any of you know how to fix that?
> Here is what i can see when it crashes:
> # nslookup

> Server:
> Address:
> ** server can't find SERVFAIL
>> exit

> # top
> Mem: 525M Active, 256M Inact, 165M Wired, 47M Cache, 111M Buf, 3496K Free
> Swap: 2048M Total, 60K Used, 2048M Free
> 67330 root 96 0 516M 515M select 79:50 0.00% 0.00% named
> Every other process take no or almost no memory
> Then it craps out with those errors:
> 10:23:05named[: resolver.c:2870: unexpected error:
> 10:23:05named[: isc_timer_create: out of memory
> 10:23:20 ruff named[67330]: timer.c:650: unexpected error:
> 10:23:20 ruff named[67330]: couldn't allocate event
> 10:26:37named[: isc_timer_create: out of memory
> 10:30:01named[: dropped command channel from out of memory
> 10:59:09named[: ifiter_getifaddrs.c:61: unexpected error:
> 10:59:09named[: getting interface addresses: getifaddrs: Cannot allocate
> memory
> 11:00:00named[: dropped command channel from out of memory
> 11:00:01named[: cache cleaner could not create iterator: out of memory
> 11:00:01named[: cache.c:610: unexpected error:
> 11:00:01named[: cache cleaner: dns_dbiterator_first() failed: out of memory
> I had no response on my original post(F.R.A.T.):
> "
> I have a 2 BIND servers and the second one copies data(zone files) and
> reloads it every hour or so.
> The version of the BIND is 9.3.2
> This always happens after bind runs for 60-100hours. It crashes every time.
> 11:00:01 server named[20174]: dns_master_load: out of memory
> 11:00:01 server named[20174]: could not configure root hints
> from'named.root': out of memory
> 11:00:01 server named[20174]: reloading configuration failed: out of memory
> Now the weird thing is that I had BIND running with the exact same
> configuration on another server and it never crashed.
> Also I had the same config on the same server with another version of the OS
> and another version of bind and it never crashed.
> I browsed other posts about the out of memory issue and it said to change
> the datasize variable.
> Right now, I have no Datasize variable set in named.conf so the default is
> default(from bind's admin book).
> It says "default uses the limit that was in force when the server was
> started.". Now I have no idea how much that is and how much i should put as
> datasize variable.
> Do you know how to check the size of the datasize if it's set to default?
> Do you guys ever had this problem before and have an idea of a good
> datasize?(I guess it depends on what you do with the server and what kind of
> server it is...)
> ----------
> Operating System Resource Limits
> The server's usage of many system resources can be limited. Scaled values
> are allowed when specifying resource limits. For example, 1G can be used
> instead of 1073741824 to specify a limit of one gigabyte. unlimited requests
> unlimited use, or the maximum available amount. default uses the limit that
> was in force when the server was started. See the description of size_spec
> in Section 6.1.
> The following options set operating system resource limits for the name
> server process. Some operating systems don't support some or any of the
> limits. On such systems, a warning will be issued if the unsupported limit
> is used.
> coresize
> The maximum size of a core dump. The default is default.
> datasize
> The maximum amount of data memory the server may use. The default is
> default. This is a hard limit on server memory usage. If the server attempts
> to allocate memory in excess of this limit, the allocation will fail, which
> may in turn leave the server unable to perform DNS service. Therefore, this
> option is rarely useful as a way of limiting the amount of memory used by
> the server, but it can be used to raise an operating system data size limit
> that is too small by default. If you wish to limit the amount of memory used
> by the server, use the max-cache-size and recursive-clients options instead.
> files
> The maximum number of files the server may have open concurrently. The
> default is unlimited.
> stacksize
> The maximum amount of stack memory the server may use. The default is
> default.
> ----------
> "
> Etienne
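Etienne asks how to check the effective datasize when it is left at
"default". One way to do it from the shell, sketched here under the
assumption of a Linux host (the process name `named` and the
`/proc/<pid>/limits` file are Linux-specific assumptions):

```shell
#!/usr/bin/env bash
# "default" datasize means named inherits whatever limit was in force
# when it started; ulimit -d shows the data-segment limit the current
# shell would pass on to a child process.
ulimit -d

# For an already-running named, ask the kernel directly.
pid=$(pgrep -x named 2>/dev/null) \
  && grep "Max data size" "/proc/$pid/limits" \
  || echo "named is not running"
```

If `ulimit -d` prints "unlimited", that explains why named could grow
until the machine itself ran out of memory.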