We like to keep the query volume balanced between *data centers* even
when work is being done on one or more nameservers in a particular data
center. So each data center gets a VIP and the traffic stays nice and
balanced even when we're doing maintenance.
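For illustration only, here's a minimal sketch of what the slave side of
such a setup might look like in BIND: queries arrive via the load
balancer's VIP, while zone transfers run over each server's real
(secondary-NIC) address. All addresses and the zone name below are
hypothetical, not taken from the thread.

```
// Hypothetical slave-side named.conf fragment.
// 192.0.2.10 = address the CSS/Big-IP forwards client queries to
// 10.0.0.x   = real transfer-interface addresses (made up)
options {
    // Answer queries on the address behind the VIP
    listen-on { 192.0.2.10; };
    // Source outbound zone transfers from the dedicated interface
    transfer-source 10.0.0.10;
};

zone "example.com" {
    type slave;
    file "slaves/example.com.db";
    // Pull from the master's real address, not its VIP
    masters { 10.0.0.1; };
};
```

The point being that NS records at the registrar only ever name the VIPs,
so individual servers can be swapped out without registrar changes.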

This is using Cisco CSSes, by the way, not F5s, but it's the same basic idea.

- Kevin

Jeff Lightner wrote:
> If you were working on the master wouldn't it just go to your secondary
> server (or vice versa) anyway without the F5? Do you do it to reduce
> latency for people trying to resolve the domains you host?
>
> That is to say if you've already put the two DNS servers in all your
> Registrars wouldn't that take care of it?
>
> -----Original Message-----
> From: bind-users-bounce@isc.org [mailto:bind-users-bounce@isc.org] On
> Behalf Of Kevin Darcy
> Sent: Tuesday, February 20, 2007 3:33 PM
> To: bind-users@isc.org
> Subject: Re: F5 and DNS
>
> We implement it so that we can do maintenance and/or lease-replacements
> on our servers transparently and without having to change anything at
> the registrar (which can be a pain when you host thousands of domains).
>
> - Kevin
>
> Jeff Lightner wrote:
>
>> I believe you can so long as you have a secondary NIC in each to do
>> the transfers from master to slave.
>>
>> However, to everything querying the VIP it's going to look like one server.
>> You'd have to make sure you only used the one VIP in your Registrars'
>> records.
>>
>> However, since DNS is designed to have two servers, there doesn't seem
>> to be much reason to do the Big-IP load balance and failover in the
>> first place just for DNS.
>>
>> -----Original Message-----
>> From: bind-users-bounce@isc.org [mailto:bind-users-bounce@isc.org] On
>> Behalf Of Sangoi, Nehal (GE Supply, consultant)
>> Sent: Tuesday, February 20, 2007 11:11 AM
>> To: bind-users@isc.org
>> Subject: F5 and DNS
>>
>> Hi All
>>
>> Can I still have the Master/Slave configuration of DNS on servers when
>> they serve through a VIP set on Big-IP in load balance and failover
>> mode?
>>
>>
>>
>> Thanks
>> Nehal