

To: Thomas Corte <Thomas.Corte@knipp.de>
CC: "Hollenbeck, Scott" <shollenbeck@verisign.com>, "'ietf-provreg@cafax.se'" <ietf-provreg@cafax.se>
From: Daniel Manley <dmanley@tucows.com>
Date: Mon, 24 Sep 2001 12:02:47 -0400
Sender: owner-ietf-provreg@cafax.se
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.4) Gecko/20010913
Subject: Re: Command Recovery


Thomas Corte wrote:

>Hello,
>
>On Thu, 20 Sep 2001, Daniel Manley wrote:
>
>>Thomas,
>>
>>I see your point but I think that it's not right to not enforce
>>uniqueness and then throw an error if the queried identifier is not
>>unique.  The server should probably enforce uniqueness right off the bat.
>>
>
>So the server would throw an error if it receives a different command
>with the same trid?
>This would be nice, but it has been argued that enforcing uniqueness
>at the server side could involve too expensive lookup operations for
>each command, especially for read-only commands which do not require the
>discussed recovery.
>
Yes, that's certainly a good point.  Then transaction objects would only 
be recorded for transform actions.  Uniqueness would not be enforced for 
query operations since they don't change the state of objects in the 
registry.

But given Scott's last response to this thread, it looks like this is 
turning out to be a "nice-to-have" vs. "expensive-to-implement" issue. 
But would enforcing uniqueness really be that big a burden?  Would it 
be fair to use the example of storing the transactions in a table with 
a unique index on registrar ID and client trid?  Databases are usually 
pretty efficient at rejecting inserts on duplicate keys.  Or are 
implementation discussions out of scope in this discussion?





