

To: Keith Moore <moore@cs.utk.edu>
cc: ngtrans@sunroof.eng.sun.com, namedroppers@ops.ietf.org, ipng@sunroof.eng.sun.com, dnsop@cafax.se
From: Robert Elz <kre@munnari.OZ.AU>
Date: Thu, 09 Aug 2001 02:28:57 +0700
In-Reply-To: <200108081854.OAA15722@astro.cs.utk.edu>
Sender: owner-dnsop@cafax.se
Subject: Re: (ngtrans) Joint DNSEXT & NGTRANS summary

    Date:        Wed, 08 Aug 2001 14:54:32 -0400
    From:        Keith Moore <moore@cs.utk.edu>
    Message-ID:  <200108081854.OAA15722@astro.cs.utk.edu>

  | why?  nothing in the standards requires that all IP addresses be listed,

Again, it depends upon exactly what you're measuring here.   The DNS
defines the name, and if it has A (or AAAA or A6) RR's, then those define
the addresses that belong to that name.
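A minimal sketch (not from the original mail; the hostname is illustrative only) of what that definition means in practice: the addresses that belong to a name are exactly what the resolver returns for it - in the sockets API, getaddrinfo().

```python
# Sketch: the DNS-defined name-to-address mapping is whatever the
# resolver returns; getaddrinfo() covers both A and AAAA lookups.
import socket

def addresses_for(name):
    """Return the sorted set of IP addresses the DNS maps to `name`."""
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})

print(addresses_for("localhost"))  # e.g. ['127.0.0.1', '::1']
```

Any other address that happens to reach the same hardware simply isn't in this set, which is the point being made above.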

That someone might know of some other address that happens to reach
the same piece of hardware isn't really relevant - even if the other
address happens to work just as well.

  | there's also no reason that you couldn't use a completely
  | different mechanism than DNS to determine the IP address associated with
  | a service.  (we will certainly need to do this someday, as DNS cannot 
  | last forever)

Perhaps.   When that happens then we'll have two ways to define the
mapping between names and addresses, and when they're not the same,
we'll have a mess (unless we also design a whole new naming scheme to
go with it, which is a rat hole I think I'll avoid in this discussion).

  | if hosts are notified when their address prefixes change, that notification 
  | can be forwarded to any connections (TCP or otherwise) that the host knows 
  | about.

Yes, at the TCP level, things are easy (well, could be, assuming
adequate protocol support, and "easy" is probably the wrong term
even then, "possible" would be better).

It is everything else that causes the problems.   Long-lived UDP
NFS mounts are one case to really worry about.   With v6 they're almost
all going to be able to use site-local addressing, and so be immune,
but they still show the problem - the server explicitly has no knowledge
of who is connected, and it can be weeks between packets from clients
to servers if nothing is happening...

If the client just validates the name it was given whenever its address
has expired, then it can pick up on address changes, and use the new
address (the server doesn't care what address requests come to).
Client renumberings in this scenario don't matter, as the server doesn't
send unsolicited packets - only replies (so as long as the address remains
valid for a RTT, that's enough...)
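That client-side scheme might be sketched like this (hypothetical class and fixed TTL value, not from the original mail): cache the server's address, and re-resolve the name once the cached copy has expired, so a renumbered server is picked up before the next request.

```python
import socket
import time

class ExpiringAddress:
    """Re-validate a server's name once the cached address expires.

    The TTL here is a fixed illustrative value; a real client would
    honour the TTL carried in the DNS response.  Since the server
    doesn't care which of its addresses requests arrive on, switching
    to the newly resolved address is safe at any request boundary.
    """

    def __init__(self, name, ttl=3600.0):
        self.name = name
        self.ttl = ttl
        self._addr = None
        self._expires = 0.0

    def current(self):
        now = time.monotonic()
        if self._addr is None or now >= self._expires:
            # Cached address expired: look the name up again and use
            # whatever the DNS returns now.
            self._addr = socket.getaddrinfo(self.name, None)[0][4][0]
            self._expires = now + self.ttl
        return self._addr
```

A client would call current() before each request; replies come back to whatever address it sent from, so its own renumbering needs nothing more than the one-RTT address validity noted above.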

Each protocol really needs to be investigated separately; what might
work, and what mightn't, will vary enormously.   Which is another reason
why attempting to fix it at the IP layer probably isn't the solution.

  | similarly, if those hosts can notify resident applications that the 
  | addresses prefixes have changed, those applications can at least potentially
  | notify their peers.

Only if the end that is renumbering has any idea who its peers are.
That isn't always the case.

  | this doesn't solve the problem entirely, since more than one host 
  | in a conversation could be concurrently changing addresses.

Yes, that part is the really hard one to handle - but if we can't get it
right without that case, worrying about it is a waste of energy.

  | but there is also probably a practical limit to the number 
  | of address prefixes that should be used at any one time.

Yes, but thanks to virtual web hosting (the old way) it isn't a limit
we're going to reach with almost any renumbering frequency that is
practical by any means...

  | well, we appear to have some people assuming that renumberings will be
  | so infrequent that we can disregard them, and others assuming that 
  | renumberings will be so frequent that protocols have to deal with them
  | explicitly.  that's no way to do engineering.  for applications to
  | make use of IPv6 there needs to be some reasonable bounds on the 
  | behavior of the network.

Yes.   Unfortunately, the very disagreement that you mention means that
deciding what those bounds should be isn't going to be easy.

Personally, I'm also not sure I need to know - I want to make it possible
to renumber easily, which should make it possible to do frequently if needed,
and then hope I almost never have to actually do it...

That is, I want to be able to deal with the worst case, but will hope that
the best case is what occurs.   If the hopes fade, I will at least know that
I'm not sunk - if they're realised, then everyone is happy.

  | I don't think that's the right criterion - first of all, that would have 
  | 50% of all connections being broken;

A factor of 2 or whatever isn't going to make any marked difference.

  | second, it's based on assumptions
  | about the nature of traffic, which will vary widely over time. 

Sure.   Though we only care about the traffic that is likely to get
caught in this net - anything too short lived, or whose characteristics
make broken connections essentially irrelevant (ie: will retry anyway)
can be ignored.   We only need to include the stuff that wants to live
a long life.

  | I'd rather say that we have expectations about how the network behaves -
  | just as (say) end-to-end packet loss should be no worse than 20%,

Hmm... if it gets to double figures I give up for the day...

But I don't think we've ever had such a definition; what packet loss
is tolerable also depends heavily upon the applications.

  | address bindings should be good for at least several days.

Would be a nice target.  Just as long as we don't actually assume that
we will be able to meet it.

kre


