

To: Olafur Gudmundsson <ogud@ogud.com>
Cc: dnssec@cafax.se
From: Randy Bush <randy@psg.com>
Date: Wed, 04 Jul 2001 10:19:13 -0700
Delivery-Date: Thu Jul 5 12:17:08 2001
Sender: owner-dnssec@cafax.se
Subject: Re: ttl problems in DNSSEC

>>>>> [1]: perhaps the resolver is able to detect this situation by
>>>>> comparing the key tag field on S++2(A) with K++1, and then try to
>>>>> get more recent data.
>>>> then you propose that if data is BAD an extra query for the key must
>>>> be done? I'm afraid this will yield too many extra queries...
>>> Not a problem; this will only happen once in a long while, so when it
>>> does an extra query is fine.
>> so you are saying if there is bad data it is okay to requery for stuff
>> you need? (Another key for instance)
> Yes if you discover another key is needed then you need to get it.

how often will this happen?  O(sig verify failures)?  is it a dos opening?
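[a sketch of the mechanism being debated above, not from the thread itself: the resolver compares the key tag carried in a signature against the tags of the keys it has cached, and only re-queries for the key set on a mismatch. the key tag computation below is the checksum defined in RFC 2535 Appendix C; the helper name `needs_key_refetch` is made up for illustration.]

```python
def key_tag(rdata: bytes) -> int:
    # RFC 2535 Appendix C key tag: 16-bit checksum over the KEY RR RDATA,
    # high byte for even offsets, low byte for odd, carries folded back in.
    acc = 0
    for i, b in enumerate(rdata):
        acc += (b << 8) if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def needs_key_refetch(sig_key_tag: int, cached_key_rdatas) -> bool:
    # True when the SIG's key tag matches none of the cached keys --
    # the situation above where data is signed by K++1 but we hold K.
    return all(key_tag(r) != sig_key_tag for r in cached_key_rdatas)
```

[so the extra query happens exactly when a signature names an unknown tag, which is the frequency question randy asks here.]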

>> I think it is pretty easy for an attacker to fake bad data and thus
>> generate a huge amount of extra queries. Those queries must always be
>> validated (as bad data isn't cached), thus generating an extra load on
>> the nameserver.
> I know, the simplest one is to toggle the key IDs on the signatures.

i am slow this morning.  how does this bring re-fetches to some reasonable
number?
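[a toy model of the attack described above, not from the thread: if each forged answer carries a toggled key tag that matches no cached key, every forgery costs the resolver one extra key query, so the extra load grows linearly with spoofed answers. the `Resolver` class is invented for illustration.]

```python
class Resolver:
    # Minimal model: cache of known key tags plus a counter of key fetches.
    def __init__(self, cached_tags):
        self.cached_tags = set(cached_tags)
        self.key_fetches = 0

    def validate(self, sig_tag: int) -> bool:
        if sig_tag not in self.cached_tags:
            self.key_fetches += 1   # re-query for the key set...
            return False            # ...which still won't verify a forgery
        return True

r = Resolver(cached_tags={12345})
for i in range(1000):                # attacker toggles the tag on each SIG
    r.validate(12345 ^ (1 + i % 7))  # any tag not in cache forces a fetch
# r.key_fetches is now 1000: one extra key query per forged answer,
# i.e. O(sig verify failures), which is the DoS opening in question.
```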

>> (is this the old DNSSEC doesn't protect you against DOS attacks? (but
>> makes them very easy to do))
> Exactly, that is why you should try to use TSIG/TKEY or TLS or IPSEC when
> frequently talking to nameservers. 

i.e. public key dnssec does not scale well so we go to shared secret schemes
with non-scalable, or unknown, key distribution schemes?  is something
broken here?

randy
