

To: Roy Arends <roy@dnss.ec>
cc: dnssec@cafax.se
From: Paul Wouters <paul@xelerance.com>
Date: Mon, 21 Jun 2004 16:14:35 +0200 (MET DST)
In-Reply-To: <Pine.BSO.4.56.0406211242590.31719@trinitario.schlyter.se>
Sender: owner-dnssec@cafax.se
Subject: Re: continued: rrsig(qtype)

On Mon, 21 Jun 2004, Roy Arends wrote:

> I have concerns too about this, and I don't want to handwave it, but
> other mechanisms have online keys as well: ssh/tls/ipsec. Though they are
> used in a different context with different security models on boxes which
> are probably a lot less visible than an authoritative nameserver, they
> have keys online.

Those keys belong only to the vulnerable host itself. A hacked authoritative
nameserver with DNSSEC keys compromises one domain (or a whole lot of them!).
You cannot compare the two.
 
> Yes, due to client/server imbalance there is a higher load on server
> resources compared to clients than with DNSSEC-bis.

And the only reason for this is the supposed secret/patent/trademark status of
the .com zone? I wish the ccTLDs would just go on without Verisign.
 
> Otoh, zone-size will be less, search algorithms will be faster, and
> crypto-accelerators exist.

That won't help against a DDoS attack, though. It will always be cheaper to
send the queries than to answer them.

> performance as well. Besides, the dns is scalable in width, where a
> domain can have X servers behind Y IP addresses behind Z names, to cope with
> perceived load increase.

Adding X KEYs and SIGs for every authoritative nameserver.........

> > Separating the signing from the serving of data was in my view an essential
> > part of dnssec security.
> 
> Then please explain why. I've seen this type of statement several times
> and it belonged to the two groups of concerns: crypto-DoS and key-leak.
> These statements have been used in the past, and I'm trying to relate them
> to other fields, current practice, and best practices.

Because of the path of trust relationships between servers. You want
a severe limitation on the access to modify data, yet you want to be
liberal in access to signed (and therefore more or less read-only) data.

For instance, access to the Openswan master CVS repository is severely limited,
and you need IPsec to even reach it before you can try to authenticate.
But for pulling the CVS repository, any IP can just anonymously retrieve it.
This functionality is best separated clearly, with separate servers as well.
The master has permission to upload changes to the public repository, but
the public repository should have no access whatsoever to the master, since it
is much more vulnerable than the master, *because* it is offering a public
service.

The same is true for a nameserver. A nameserver is vulnerable *because* it is
a nameserver. Ideally it should not have the access rights to change the zone
data it is serving. We never had the possibility with regular DNS to separate
this, but DNSSEC gave us the possibility to do so. In my (non-TLD) opinion,
that is much more important than trying to protect data that is publicly
published.
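A minimal sketch of that separation, using nothing beyond the Python standard
library (hedged: an HMAC stands in for the real asymmetric DNSKEY/RRSIG
signatures so the example is self-contained, and all names, keys, and records
below are invented):

```python
# Toy model of "sign offline, serve read-only".
# NOTE: real DNSSEC uses asymmetric keys (DNSKEY/RRSIG); the HMAC here
# is only a stand-in so this runs with the standard library alone.
import hashlib
import hmac

def sign_zone(records, signing_key):
    """Run on the offline signer. The signing key never leaves this machine."""
    return {
        name: (rdata,
               hmac.new(signing_key, f"{name}:{rdata}".encode(),
                        hashlib.sha256).hexdigest())
        for name, rdata in records.items()
    }

class PublicNameserver:
    """Holds only pre-signed data; it has no key and cannot re-sign anything."""
    def __init__(self, signed_records):
        self._records = dict(signed_records)

    def query(self, name):
        # Returns (rdata, signature) or None; serving is read-only.
        return self._records.get(name)

key = b"kept-on-the-offline-signer-only"   # invented key material
zone = {"www.example.": "192.0.2.1", "mail.example.": "192.0.2.2"}
ns = PublicNameserver(sign_zone(zone, key))

rdata, sig = ns.query("www.example.")
# If this public server is hacked, the attacker can alter rdata but
# cannot produce a matching signature, so validators reject the answer.
```

The point of the sketch is the trust direction: the signer pushes signed data
outward, and the public server has nothing worth stealing.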

> Mandatory ? This is recommended, not mandatory. Zones decrease in size.
> Negative responses decrease in size. Positive responses will include the
> DNSKEY RRset when there is space. Due to caching, absence of DNSKEY in a
> DNS message does not automatically imply a requery specifically for
> DNSKEY. If all else fails, limit the number of 'denial-keys' to 1.

To me, the security of separating signing and serving is worth more than the
reduced traffic. DNS makes up about 0% of my traffic (or an end user's traffic).
 
> I'm talking about a referral to an unsigned zone. That is, a subzone of a
> zone I am authoritative for. I'm also authoritative for the DS record, or
> the absence of the DS record. With DNSSEC records, denial of DS is done by
> providing NSEC for that delegation and checking if the DS bit is clear. With
> rrsig(qtype) it is a matter of denying DS, and since you know beforehand
> what delegations are not secured, it is trivial to presign denial of DS.
> My apologies if the provided text is not that clear, it was a braindump,
> not a draft :)

Ahh, okay.
 
> > DNS data is public, even the .com zone. Don't contaminate the protocol
> > to hide public data.
> 
> What public data am I hiding with this proposal ? I am obscuring nothing.

I am talking about people who want to prevent NXT walking.
 
> The problem with this discussion is that folk seem to forget that DNS was
> designed as a lookup service, not a search service. It is silly to deny
> that NSEC chains cause this enumeration side effect. A side-effect that
> was _not_ there in DNS.

Sure, but:

1) Is this worth all the extra drafts and rounds and IETFs and the above
   mentioned security implications?
2) Go ratelimit your queries if you are concerned about walking.
3) Go publish the list of domains in TLDs, and people will stop walking the zones.

NXT walking is a political problem, not a technical one. Do we really want to
delay deploying DNSSEC until some major distributed DNS pollution scam happens?
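For what it's worth, the walking itself is trivial: each NSEC record names the
next owner in the zone's canonical order, so denial-of-existence answers leak
the whole name list. A toy sketch, with an entirely invented zone:

```python
# Sketch of NSEC-chain walking. In a live zone, a resolver gets these
# "next owner" pointers by querying names it knows don't exist and
# reading the NSEC records in the NXDOMAIN responses.
# The zone contents below are made up for illustration.
nsec_chain = {
    "example.":       "alpha.example.",
    "alpha.example.": "bravo.example.",
    "bravo.example.": "zulu.example.",
    "zulu.example.":  "example.",      # the chain wraps back to the apex
}

def walk_zone(apex, chain):
    """Follow the NSEC 'next owner' pointers until we return to the apex."""
    names = [apex]
    nxt = chain[apex]
    while nxt != apex:
        names.append(nxt)
        nxt = chain[nxt]
    return names

print(walk_zone("example.", nsec_chain))
```

One query per existing name is enough to enumerate the zone, which is exactly
the side effect being argued about.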

> As far as 'public' is concerned: the DNS _service_ is a public service to
> translate data given preknowledge.
> 
> This is why it is trivial to find my bank account number given a
> name, address, and bank, but no bank will give you all the account numbers
> of everybody registered with them.

They are not refusing for technical reasons, but for commercial reasons. They
don't want information about their size (or customers) to leak out. However,
the TLDs are non-profits and should not have these considerations to worry about.
 
> This is why folks would like to restrict whois as well. A lookup in whois
> is trivial, but to dump the whole database is hard. For A Very Good
> Reason.

With 4-year-olds having 350,000 zombie machines available, "hard" gets a new
meaning. What is the problem with harvested whois data? The most common issue
in the past was "spam". Well, with the dictionary-based spam attacks out there,
this has become irrelevant.
 
> It is mere arrogance when folk blurt "if this enumeration thing is
> creating a problem with whois, fix whois". It would be like a car dealer
> saying "if this 30-foot-wide truck gives a problem with the roads, fix the
> roads." The problem is not whois. The problem is NSEC. I can see a whole
> range of issues regarding enumeration, not just whois. Given an entirely
> secured tree (hey, MARID would love that), I now have a way to enumerate
> EVERY existing MX record, see which is open for relay, or, as an endpoint,
> use a dictionary of common names and autospam the world. There is no end to
> what a database of names can be (ab)used for. I'm sure you can think of a
> lot more than I can.

So you are saying NXT walking is a problem because people will use faults in
WHOIS and faults in ESMTP to spam. The problem of spam is caused by
ESMTP, and not by whois or DNSSEC.

> Enumeration is a side-effect introduced by NSEC. If that is a problem,
> NSEC should be fixed. That is what I'm doing. It does not hurt current
> deployment, it helps folk who want to deploy DNSSEC in the future.

Anything not getting DNSSEC into an RFC is a problem. Postponing DNSSEC for
another year by fixing this corner case of abuse is going to give me a LOT
more spam than the amount of spam I would get when DNSSEC is deployed and
people can start using things like SPF records.

You know, the Catholic Church had similar problems in the past electing a new
Pope. The process would take weeks, months or even years. They invented the
"Conclave", a protocol still in use today, whereby all Cardinals involved in the
election process are locked up in the Sistine Chapel, and they can only come out
when they have elected a new Pope. Sometimes I think the IETF should adopt this
process. Perhaps I should write an RFC for this :P

Paul

