Maintained by: NLnet Labs

[Unbound-users] memcached backend?

Attila Nagy
Mon Oct 20 10:40:44 CEST 2008


Paul Wouters wrote:
>> Pros:
>> - if you have n machines, you can use n times the memory and increase 
>> hit rate
>
> Do resolvers these days actually use more than 8GB of RAM? Because a
> 1000 euro 1U server comes with 8GB and a quadcore cpu.
Well, I don't know. With 4 GB (unbound using about 3.5 GB), I get 
numbers similar to these:
info: server stats for thread 0: 205293878 queries, 166523599 answers 
from cache, 38770279 recursions

But will try with 8 GB.
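(Incidentally, those counters work out to roughly an 81% cache hit rate; a quick check using only the numbers quoted above:)

```python
# Server stats for thread 0, as quoted above.
queries = 205_293_878
answers_from_cache = 166_523_599
recursions = 38_770_279

# Sanity check: every query was either answered from cache or recursed.
assert answers_from_cache + recursions == queries

hit_rate = answers_from_cache / queries
print(f"cache hit rate: {hit_rate:.1%}")  # prints "cache hit rate: 81.1%"
```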

>> - you will get consistent results, no matter what server you asked
>
> Mind you that you're just changing the time stamp of the old->new
> record change. While you can argue about helping bad administrators
> get rid of bad long-TTL records, you can also reason the other way:
> a bad administrator's mistake will show up sooner, before he corrects
> it. I think in general one should not base an architecture on such a
> corner case.
I don't think having a shared cache (from which all servers answer 
queries equally) means that this architecture is built on such a 
corner case.
The world is not perfect, and if you run a caching server used by many 
people, they will find you first when they get the "old" website from 
one machine and the "new" one from another.
They can understand and tolerate getting only the old and then the 
new, but this kind of inconsistency hits them hard.
Of course with one server (and with one unbound process running) you 
won't have these problems. But with two or more, these cases will 
generate customer calls, which could be easily solved (or at least 
minimized) with a shared cache.
I mentioned this only as a positive side effect; it is not the main 
driver behind the shared cache "concept".
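(For what it's worth, the consistency argument can be sketched with a toy model: n resolver frontends all consult one shared store, so every frontend hands out the identical record until its TTL expires. A plain dict stands in for memcached below, and all the names and the upstream stub are illustrative, not Unbound's actual internals:)

```python
import time

# Toy shared cache standing in for memcached: every resolver frontend
# reads and writes the same store, so answers stay consistent across them.
shared_cache = {}  # name -> (record, expiry timestamp)

def resolve(name, recurse):
    """Answer from the shared cache, recursing upstream only on a miss."""
    now = time.time()
    entry = shared_cache.get(name)
    if entry is not None and entry[1] > now:
        return entry[0]  # cache hit: identical answer for every frontend
    record, ttl = recurse(name)  # cache miss: one upstream recursion
    shared_cache[name] = (record, now + ttl)
    return record

# Hypothetical upstream lookup; in reality this is the recursion step.
def upstream(name):
    return ("192.0.2.1", 300)  # (address, TTL in seconds)

# Two "frontends" asking the same question get the same answer, and the
# second one is served from the shared cache without going upstream.
a = resolve("www.example.com", upstream)
b = resolve("www.example.com", upstream)
assert a == b == "192.0.2.1"
```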