

To: Pekka Savola <pekkas@netcore.fi>
Cc: dnsop@cafax.se
From: Johan Ihren <johani@autonomica.se>
Date: 28 Feb 2002 17:59:59 +0100
In-Reply-To: <Pine.LNX.4.44.0202271444100.6578-100000@netcore.fi>
Sender: owner-dnsop@cafax.se
User-Agent: Gnus/5.0808 (Gnus v5.8.8) Emacs/20.3
Subject: Re: I-D ACTION:draft-ietf-dnsop-v6-name-space-fragmentation-00.txt

Pekka Savola <pekkas@netcore.fi> writes:

Hi Pekka,

> On 25 Feb 2002, Johan Ihren wrote:
> > > On 9 Feb 2002, Johan Ihren wrote:
> > > > > 2. Introduction to the problem of name space fragmentation
> > > > > 
> > > > >    With all DNS data only available over IPv4 transport everything is
> > > > >    simple. IPv4 resolvers can use the intended mechanism of following
> > > > >    referrals from the root and down while IPv6 resolvers have to work
> > > > >    through a "translator", i.e. they have to use a second name server
> > > > >    on a so-called "dual stack" host as a "forwarder" since they cannot
> > > > >    access the DNS data directly. This is not a scalable solution.    
> > > > > 
> > > > > ==> The last sentence is completely false; the truth is exactly the
> > > > > opposite.  This makes an assumption that there would be only a few
> > > > > "forwarding" servers: IMO, _every ISP_ providing IPv6-only service should
> > > > > provide this capability.  This is very scalable.
> > > > 
> > > > Well, that's not the entire problem. "Forwarding" is something you
> > > > configure statically to get help from someone else with resolution.
> > > > Because of this (and other reasons) you configure your forwarder as an
> > > > IP address, not a name.
> > > > 
> > > > Over time we expect to have hundreds of thousands (or more) of caching
> > > > resolvers on v6 transport. Given a traditional forwarding solution all
> > > > of them will then need statically configured v6 addresses to the ISP
> > > > forwarding services. This will have to be maintained over a very long
> > > > term (tens of years), with ISPs coming and going, ISPs having to
> > > > restructure their services (i.e. moving the forwarders), etc, etc. 
> > > 
> > > This seems to be no different from having to renumber your site; if you 
> > > have to renumber, I suspect changing this also is not all that fatal thing 
> > > to do.
> 
> You didn't comment on this one except with one line break, so I guess you 
> forgot to do so.

Yes. Sorry. Can we let this one rest with just comparing it with the
taxman?  "This new tax is just a few more percent of your income.
Surely that will not be a problem for you to pay?"

I want to see solutions without obvious future headaches, while you
seem to argue that with one headache already raging, another one will
not hurt much. I think that switching away from A6/DNAME will increase
future pain when trying to solve the renumbering problem, but just
because that particular stone is already rolling doesn't mean we
should add to the collective pain.

However, *if* the anycast trick turns out to work for the case v6
client --> v4 server, then this particular problem shrinks to
manageable size again.
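To make the maintenance problem concrete: a traditional forwarding
setup hardwires the ISP's forwarder addresses into every caching
resolver. A minimal sketch in BIND named.conf syntax (addresses and
the two-forwarder layout are purely illustrative):

```
// Hypothetical v6-only caching resolver, statically forwarding to
// its ISP's dual-stack forwarders. If the ISP renumbers, moves the
// service, or goes away, every resolver configured like this must
// be found and reconfigured by hand -- there is no location
// mechanism to update them automatically.
options {
    forward only;
    forwarders {
        2001:db8:1::53;    // example ISP forwarder #1
        2001:db8:2::53;    // example ISP forwarder #2
    };
};
```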

> > > > Deploying a few forwarders to be used by a few caching resolvers for a
> > > > few months is easy. Deploying massive numbers of forwarders to be used
> > > > by even larger numbers of clients for tens of years *with no location
> > > > mechanism* is definitely a problem.
> > > > 
> > > > Since it becomes a problem as we scale it up I see it as a scalability
> > > > problem. That is not to say that it is a *performance* problem. It is
> > > > a *maintenance* problem that does not scale.
> > > 
> > > Nothing prevents defining e.g. IPv6 anycast addresses to be (also) used
> > > for this specific purpose (or well known addresses, in IPv4 anycast
> > > -style), to be deployed everywhere in the world.  This avoids the
> > > maintenance problem.
> > 
> > This only works in one direction: v6 client to v4 server. But I
> > completely agree with you that given some sort of DNS transport
> > bridging service, locating the bridges on v6 anycast addresses is
> > probably the best solution to that sub-problem. And as you may know
> > there is a draft by Alain Durand proposing exactly that mechanism.
> 
> I was not aware of that, but after checking, I found the draft and will 
> read it at some point.
>  
> > Then remains the question of whether the bridge should be a "forwarder"
> > (i.e. recursive, having to start over from the root even if bridging is
> > only needed for the last mile) or a "proxy" (non-recursive), and of
> > course the real headache (v4 client to v6 server).
> 
> Note that the classification problem would go away if we could assume the
> server would also be used for other lookups, it should always have cached
> data for all but the last mile.

I think there's a teeny weeny bit too much of .com to cache all of it
these days... and then you're back to exactly the same recursive
lookup as starting over implies, since statistically no one has to hit
the roots for a normal lookup anyhow.

> v4 client to v6 server is of course still a problem if you assume:
>  - v4 client performs queries beginning from the root, not via a forwarder
>  - v4 client would very much like to get DNS data from v6-only zones
>    (I think the chances are very very low it'd find anything there.)
>  - etc., as was mentioned below.
> 
> I think we should consider the sub-problems separately:
>  1) v6-only -> v4-only
>  2) v4-only -> v6-only

I assume that you mean that the first is "v6 client --> v4 server" and
the second is "v4 client --> v6 server".

> 1) is, I think, the real issue at hand here, and MUST be solved somehow, 
> and rather quickly.  A few mechanisms work, some more operational than 
> others.

1) is at least possible to solve, through the suggested anycast
   method. However, not everyone agrees that this is the real issue.
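The suggested anycast method would replace per-ISP static addresses
with a single well-known one. A sketch, again in named.conf syntax;
the address shown is a placeholder, since the actual well-known
anycast address would have to be formally assigned (as the Durand
draft proposes), not invented here:

```
// Hypothetical: a v6-only caching resolver forwards to a
// well-known anycast address. Routing delivers each query to the
// nearest dual-stack bridge announcing that address, which
// completes the lookup over v4 transport. No per-ISP
// reconfiguration is ever needed.
options {
    forwarders {
        2001:db8::53;    // placeholder for a well-known anycast address
    };
};
```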

> 2) is a bit more academic exercise, and if an easy solution won't
> manifest itself, I think it should be handled by e.g. publishing a
> BCP "don't do that if you have any v4 records in your zone or your
> subzones".  Also, I don't think this MUST be solved today.. it would
> be nice to, but the focus should be on 1).

If you call this an "academic exercise" in Randy's presence I cannot
take responsibility for the consequences ;-) But really, I do *not*
think this is an academic issue. This is a major issue, because we're
about to start destroying (in the sense of "eroding") the Internet
namespace, as seen from a v4 client.

For many years to come v4 clients will constitute the bulk of the
Internet. And destroying their namespace is not a thing to be taken
lightly. They may get annoyed at you ;-)

> but I think we've discussed most of this already.

Yes.

> > Furthermore, the focus of the document is not the people "who care"
> > (and have resources). They do not usually have a problem, since they
> > in this case will simply ensure that their zones are reachable over both
> > transports. Focus is rather on the people who lack either sufficient
> > insight or sufficient infrastructure to easily keep their zones
> > available over v4 transport.
> 
> One could argue they don't very probably have any v4 records anyway.

And what does that matter? We're not talking about reachability of
some app service, we're talking about maintaining the DNS
namespace. 

Let's assume that you're v6 only, couldn't care less about v4
etc. Then you delegate a subdomain to me. And I do care about v4. I do
make my webserver available over v4 transport. Do you see the problem?
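The problem can be sketched with two zone fragments (all names and
addresses are examples, not from the draft). The parent zone is served
only over v6 transport; the delegated child publishes v4 services:

```
; Parent zone: name servers reachable over v6 only
parent.example.            NS    ns1.parent.example.
ns1.parent.example.        AAAA  2001:db8::1        ; no A record

; Child zone, delegated from the parent, serving v4 clients
child.parent.example.      NS    ns1.child.parent.example.
ns1.child.parent.example.  A     192.0.2.53
www.child.parent.example.  A     192.0.2.80
```

A v4-only resolver following referrals from the root gets the
delegation to ns1.parent.example but cannot query it over v4, so
www.child.parent.example is unresolvable from v4 even though the
web server itself is perfectly reachable over v4 transport. That is
the erosion of the namespace as seen from a v4 client.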

> [...]
> > I've seen estimates that there are more than one million caching
> > resolvers on the public Internet today, and reconfiguring all of them
> > (even if we could find them, which we cannot) to utilize a forwarding
> > scheme is not trivial. It is in all likelihood not even possible.
> 
> The number of v4-only caching resolvers in "10 years" (the time when v4 
> removal could even be considered) would be much, much smaller.

I think it is a bit premature to start discussing v4 removal. On the
contrary I think that a common mistake is to consider this to be a
migration from v4 to v6. A better view is as a migration from v4 to
(v4+v6).

> > > > > 5. Overview of suggested transition method.
> > > > > 
> > > > >    By following the steps outlined below it will be possible to
> > > > >    transition without outages or lack of service.
> > > > > 
> > > > > ==> this seems to destroy the credibility of this "comparison".
> > > > 
> > > > Please elaborate.
> > > 
> > > If you wish to compare two mechanisms, you should not slip in remarks that 
> > > might be viewed as propaganda against the other mechanism.
> > 
> > That is obvious. What is not obvious is that there is some sort of
> > comparison going on here. I cannot even find the word "comparison"
> > (that you have put within quotes) in the draft.
> 
> You introduce two alternative approaches and explore their seemed 
> advantages/drawbacks.  I'd call that a comparison.

I do? Wow, I had no idea. Really, either I am really deeply asleep at
the wheel or you are reading the wrong document (my guess would
honestly be the former). We simply do not communicate here, so, since
this is probably not a really central part of the discussion I suggest
we either synchronize our watches offline or simply drop this part.

Regards,

Johan

