Maintained by: NLnet Labs

[Unbound-users] "peering" unbound servers together

Florian Lohoff
Fri Apr 19 23:00:44 CEST 2013


On Fri, Apr 19, 2013 at 05:52:08PM +0200, Graham Beneke wrote:
> A query is received from a stub resolver for which an answer is not
> immediately available from the local cache. The resolver first forwards
> this query to a neighbor resolver (hoping for a cache hit) and then
> directly after that (or delayed by ~10 ms) begins its own full recursion.
> We end up with 2 (or more) resolvers all racing to get to the answer
> first. Whichever answer (neighbor or authoritative) is returned to the
> original server first is then cached and returned to the stub.
> This does mean that neighbor resolvers are potentially both doing the
> same recursion at the same time but I'm not too worried about this. It
> has the side effect of filling both caches with a valid answer which I
> consider a good thing. The primary objective is the fastest possible
> responses to the stub resolvers.
> I don't see any immediately obvious way to build a configuration that
> will do this - have I missed something?
> How difficult is it likely to be to build this capability into unbound?
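The racing scheme quoted above can be sketched with asyncio. This is a minimal simulation, not real DNS code: `query_neighbor` and `query_recursive` are hypothetical stand-ins (here just timed sleeps) for a cache probe to a sibling and a full recursion; the point is only the "start neighbor first, begin own recursion after ~10 ms, take whichever answer lands first" control flow.

```python
import asyncio

async def query_neighbor(qname):
    # Simulated sibling cache lookup (hypothetical stand-in for a real
    # DNS query to a neighbor resolver): fast when the answer is cached.
    await asyncio.sleep(0.01)
    return (qname, "neighbor")

async def query_recursive(qname):
    # Simulated full recursion: typically slower than a cache hit.
    await asyncio.sleep(0.05)
    return (qname, "recursion")

async def resolve(qname, head_start=0.01):
    # Fire the neighbor query first, then begin our own recursion after
    # a short head start; return whichever answer arrives first.
    tasks = [asyncio.create_task(query_neighbor(qname))]
    await asyncio.sleep(head_start)
    tasks.append(asyncio.create_task(query_recursive(qname)))
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

answer = asyncio.run(resolve("example.com."))
print(answer)
```

With the delays chosen here the cached neighbor answer wins the race; when the sibling misses, the local recursion would win instead.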

squid used to have something called a sibling, which would only
answer from its cache and never fetch on a miss. For this
they implemented a UDP protocol (ICP), eliminating the need for TCP/HTTP
between caches.
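The sibling behaviour can be sketched as a tiny ICP-like probe over UDP. This is an illustration only, assuming a pre-warmed in-memory cache dict and plain unicast on localhost instead of a real sibling: the key property is that the sibling answers HIT only from cache and replies MISS rather than fetching.

```python
import socket

# Hypothetical pre-warmed cache on the sibling; 192.0.2.1 is a
# documentation address, not a real answer.
CACHE = {"example.com.": "192.0.2.1"}

def serve_one(sock):
    # Answer a single probe: HIT from cache, or MISS -- never fetch.
    data, addr = sock.recvfrom(512)
    name = data.decode()
    reply = "HIT " + CACHE[name] if name in CACHE else "MISS"
    sock.sendto(reply.encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)

client.sendto(b"example.com.", ("127.0.0.1", port))
serve_one(server)
hit, _ = client.recvfrom(512)

client.sendto(b"nxdomain.test.", ("127.0.0.1", port))
serve_one(server)
miss, _ = client.recvfrom(512)

print(hit, miss)
```

On a miss the querying resolver falls back to its own recursion, exactly as in the squid sibling model.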

For DNS-like setups, using something like multicast to query
siblings would be the optimal solution.

In the end you trade cache lookups, which cost CPU, against hit ratio.

My guess is that adding memory to your caches is a much easier and
cheaper way to increase hit rate.
If increasing memory does not help to increase the hit rate, asking siblings
wouldn't help either. You would simply duplicate your content while
spending more CPU cycles.

Florian Lohoff                                                 f at