Maintained by: NLnet Labs

[Unbound-users] Algorithm downgrade protection

W.C.A. Wijngaards
Thu Sep 15 15:00:21 CEST 2011

Lately there have been operational failures where domains became
DNSSEC-bogus, and for which it is possible to 'fix' unbound.  What
happened was a failure in an algorithm rollover that left the zone
un-validatable by unbound (but bind worked).  Unbound detected it as an
algorithm downgrade and failed validation.

Should we turn off algorithm downgrade protection?

There can be an option to turn it back on, but most users won't bother
to do that: 'algorithm-protection: yes/no' in unbound.conf.  If there
is such an option, what should its default value be?
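As a sketch, the proposed knob could look like this in unbound.conf
(the name 'algorithm-protection' is only the suggestion above, not a
shipped option):

```
server:
    # Proposed: check every algorithm listed in the DS set (yes, the
    # strict current behaviour), or accept any one valid algorithm (no).
    algorithm-protection: yes
```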

What is an algorithm downgrade?  If one algorithm is broken, say
Hash-Algorithm-X, then signatures made with it are no longer safe.
Unbound, today, protects zones that are signed with multiple algorithms
by checking all the algorithms.  Thus the strongest algorithm protects
the zone, not the weakest.
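The strict rule can be sketched as follows.  This is a simplified
model, not Unbound's actual code; the function name and the policy flag
are hypothetical:

```python
def validate(ds_algorithms, valid_signature_algorithms, strict=True):
    """Decide secure (True) or bogus (False) for one zone.

    ds_algorithms: algorithms announced in the parent's DS records.
    valid_signature_algorithms: algorithms for which a valid RRSIG
    chain was actually found.
    """
    if strict:
        # Strongest algorithm protects: every announced algorithm
        # must be backed by valid signatures.
        return all(a in valid_signature_algorithms for a in ds_algorithms)
    # Lenient: one valid signature under any listed algorithm suffices.
    return any(a in valid_signature_algorithms for a in ds_algorithms)

# The operational mistake discussed here: a DS for an unused algorithm.
ds = {"RSASHA1", "RSASHA256"}   # parent announces both algorithms
signed = {"RSASHA256"}          # zone is only signed with one

print(validate(ds, signed, strict=True))   # False: bogus when strict
print(validate(ds, signed, strict=False))  # True: secure when lenient
```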

On the dnssec-deployment list, several experts have said they consider
unbound to be too strict.  It should not provide algorithm protection.
It should leniently accept these operational mistakes where a DS with an
unused algorithm is present.

Discussion summary

* algorithm downgrade is considered very unlikely.
* a downgrade of RSASHAx would have such large consequences that a
pre-emptive validator fix is not interesting.
* software update can be used to control algorithms the software
considers safe.
* the zone owner can control what algorithm is used.
* The mistake in this case - an extra DS - causes all validators that
support only that algorithm, and not the other, to return bogus.  So it
is a mistake where a large portion of the validators return bogus.  Do
we really need to save operators from this?
* The virginia case was a corner case with its NSEC3-related rollover:
it was the same algorithm with a different NSEC3 flag.
* false positives, such as those caused by this, damage dnssec
deployment and the willingness to turn validation on.
* signers must communicate with the parents, and this shows how it can
go wrong.  Had communication worked, the signer would not have
generated this zone.  It could just as well have been that the DS
record for the working KSK had been removed, leaving the old KSK's DS
behind, or that the new KSK's DS was not inserted.  These are similar
operational errors, for which unbound cannot be fixed.
* for a correctly signed zone, valid chains of trust for every
algorithm that it uses have to be present, as per the RFCs.  The
information ought to be there.
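The NSEC3 corner case mentioned above can be expressed with the same
kind of check.  Algorithm numbers 5 (RSASHA1) and 7 (RSASHA1-NSEC3-SHA1)
produce identical RSA/SHA-1 signatures and differ only in signaling
NSEC3 support, yet a strict per-algorithm check treats a 5-to-7
rollover as involving two distinct algorithms.  A sketch (simplified
model, not Unbound's code):

```python
# IANA DNSSEC algorithm numbers involved in the NSEC3-related rollover.
# The signatures are computed identically; only the NSEC3 capability
# signaled by the number differs.
RSASHA1 = 5
RSASHA1_NSEC3_SHA1 = 7

def strict_secure(ds_algorithms, signed_algorithms):
    # Strict policy: every algorithm number in the DS set must be
    # backed by valid signatures made under that same number.
    return all(a in signed_algorithms for a in ds_algorithms)

# Mid-rollover: the parent lists both numbers, but the zone is signed
# with only the new one.
print(strict_secure({RSASHA1, RSASHA1_NSEC3_SHA1}, {RSASHA1_NSEC3_SHA1}))
# -> False: bogus, even though the cryptography is the same either way.
```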

Best regards,