To: Havard Eidnes <he@uninett.no>
Cc: <Roy.Arends@nominum.com>, <randy@psg.com>, <GILBERT.R.LOOMIS@saic.com>, <dnssec@cafax.se>
From: Roy Arends <Roy.Arends@nominum.com>
Date: Mon, 1 Oct 2001 11:41:33 +0200 (CEST)
In-Reply-To: <20011001.104432.26017630.he@uninett.no>
Sender: owner-dnssec@cafax.se
Subject: Re: CERTificates and public keys
On Mon, 1 Oct 2001, Havard Eidnes wrote:

> > > I have never quite fathomed why some seem to have an ingrained fear
> > > of adding more data to the DNS.
> >
> > Not the "more" fact, but the "different" fact could be problematic.
> > Every unique single lookup starts at root. Consider the load of bogus
> > queries that is already hitting the root-servers. Deployment of
> > different data (other classes/new types) should be carefully
> > considered so that it under no circumstances breaks the scale. By
> > scale I mean the current growing scale of new data of the same type
> > that the root can handle.
>
> Even though every unique single lookup starts at the root, that doesn't
> mean the root name servers see the query, since in adherent-to-spec
> recursive name servers the upper parts of the name tree will already be
> cached. Surely the name server would quite seldom need to query one of
> the root name servers to refresh its information for those parts of the
> name tree (as dictated by TTLs)?

This is true.

> Therefore, adding a new record type under the IN class should not cause
> much additional load at the root, especially if you assume that other
> record types will be queried from the same zone anyway, since the same
> cached NS record chains would be used to look up the new record type as
> are used for all the other record types in the IN class.

This is true, in a perfect world. I'm also thinking about applications.
I've seen resolvers, and even applications that do their own resolving,
that simply do not cache (and work independently of whatever is
configured in resolv.conf). I'm also thinking about DS, which
deliberately causes the parent server to be queried, especially in the
case of the gtld-servers, where there is going to be a multitude of
signed DS records. Thorough testing is needed before bluntly adding that
to the specs.

> I'm pretty sure that the bogus queries as seen at the root will neither
> increase nor decrease simply by introducing a new record type, but would
> rather suggest that this particular problem has other root causes.

This is too vague for me. Any restart of a query (as with NS, MX or
CNAME) will cause more load in general on the DNS tree. Any record with
a domain name in its RDATA is susceptible to a query restart, though not
specifically at the root. Now, any qname that falls outside the DNS tree
will end up at the root. These apps/resolvers should and will be fixed,
but meanwhile the load ends up at the root. (See the small sketch
further down for the caching and query-restart point.)

> Yes, adding a new class can introduce new and interesting scaling
> issues. If I've understood correctly, adding a new class makes it
> possible to add a new set of root name servers, and the name space can
> (in principle) be separate (though it would probably be a bad idea if
> the name space was "similar").
>
> > > 1) the growth of the size of the data would all be at the edges
> > > (authoritative servers) or felt at the edges (recursive servers),
> > > where resources can relatively easily be scaled up to handle the
> > > added demand.
> >
> > Growth of the size of data is felt at the root first. If it can be
> > scaled at the root, the branches should have no problem.
>
> I don't follow you. Please explain why adding a new record type in the
> IN class will automatically increase the load at the root.

No, point 1) was not about adding types alone; it is about adding more
data in general. Adding data will simply increase load. I think no one
will argue with that (the root-server load has simply gone up since DNS
became operational).
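To make the caching and query-restart point a bit more concrete, here is
a rough toy model in Python. The names, delegations and the
"one lookup starts at the root when nothing below it is cached"
accounting are made up for illustration; it is a sketch of the idea, not
of any real resolver:

# Toy model of a resolver's delegation cache. Illustrative only:
# the names and the accounting are invented; a real resolver is far
# more involved.

CACHE = {}            # zone cut -> cached delegation (toy NS cache)
ROOT_QUERIES = 0      # how many lookups had to start at the root


def deepest_cached_cut(qname):
    """Longest cached suffix (zone cut) of qname, or '.' if none."""
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        cut = ".".join(labels[i:]) + "."
        if cut in CACHE:
            return cut
    return "."


def resolve(qname):
    """Walk down from the deepest cached cut, caching the delegations."""
    global ROOT_QUERIES
    if deepest_cached_cut(qname) == ".":
        # nothing below the root is cached for this name: hit the root
        ROOT_QUERIES += 1
    # pretend the delegations above the leaf name are now cached
    labels = qname.rstrip(".").split(".")
    for i in range(1, len(labels)):
        CACHE[".".join(labels[i:]) + "."] = "delegation"
    return "answer for " + qname


resolve("www.example.se.")        # cold cache: this one reaches the root
resolve("mail.example.se.")       # warm cache: absorbed by cached cuts
resolve("mx-target.example.nu.")  # restart into an uncached tree: root again
print("lookups that started at the root:", ROOT_QUERIES)

In this toy the second lookup is absorbed by the cached delegation for
example.se., while the restart target under an uncached tree goes back
to the root. That last case is the kind of load I'm worried about,
especially from resolvers and applications that do not cache at all.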
Remember the Microsoft incident (January 23rd, 2001), where a
misconfigured router cut their nameservers off from the net? That
increased the load at the root-servers/gtld-servers significantly. This
has nothing to do with "adding more data", but simply illustrates how
easily the load goes up. Another case: when BIND started to be more
conservative about caching out-of-domain data, the query rate at the top
went up. I'm trying to say that whatever is changed in, added to or
deleted from the specs has to be thought through with respect to what
will happen at the root; if the outcome is not certain, simply test it.

Note that I said "could be problematic" and "carefully considered". That
is different from "will cause problems" and "don't do it". All I'm
saying is: consider the consequences for the stability of the DNS tree.

> Assume that clients which query for the new record type usually fetch
> other record types from the same zones as well (not a wholly
> unreasonable assumption, I would think).

Yes, but the usual case is not what I'm concerned about.

Regards,

Roy Arends
Nominum

-------------
0-14-023750-X dcrpt ths
43.0D.01 01.05.0C 84.18.03 8A.13.04 2D.0B.0A