

To: Dave Crocker <dcrocker@brandenburg.com>
CC: "'ietf-provreg@cafax.se'" <ietf-provreg@cafax.se>
From: Daniel Manley <dmanley@tucows.com>
Date: Thu, 23 Aug 2001 14:21:59 -0400
Sender: owner-ietf-provreg@cafax.se
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.3) Gecko/20010808
Subject: Re: Message Pushing and TCP Transport

Dave Crocker wrote:

>
> You seem to have missed my point: such pinging is not a good 
> thing.  If the server wants to shut down inactive connections, 
> then the folks running the server are not going to appreciate your 
> circumventing their management policies.  They will then add 
> mechanisms for disabling the ping, in whatever form it is commonly 
> achieved. 

All the registries that my employer and I work with, which will remain 
nameless, encourage regular pinging to maintain connections.  Having to 
reconnect after a server drops you for inactivity is a pain and can 
mean a momentary interruption in service to our clients.  Secure 
connections are relatively expensive to establish.
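
To make "pinging" concrete: all I mean is a periodic no-op on the 
persistent connection, along the lines of the sketch below.  This is 
a minimal sketch assuming an EPP-style hello exchange over TLS; the 
payload and names are illustrative, not the real wire format.

    import socket
    import ssl
    import time

    HELLO = b'<epp><hello/></epp>'   # illustrative keepalive payload

    def keep_alive(host, port, interval=60):
        # One TLS handshake up front -- the expensive part we want
        # to avoid repeating on every exchange.
        ctx = ssl.create_default_context()
        conn = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        try:
            while True:
                time.sleep(interval)
                conn.sendall(HELLO)   # periodic no-op keeps the session warm
                conn.recv(4096)       # read the reply; contents ignored here
        finally:
            conn.close()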

>> clientA issues a transfer for an object and places a message on an 
>> internal queue for clientB.
>
>
> Huh?  ClientA places a message on a queue for ClientB.  What ARE you 
> talking about? 

Sorry, it was a slip of the keyboard.  I was trying to say that the 
server would place a notification on the queue of clientB.
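
In code terms, something like this minimal sketch (the names and the 
queue layout are made up for illustration):

    from collections import defaultdict
    from queue import Queue

    # Hypothetical layout: one pending-notification queue per registrar.
    client_queues = defaultdict(Queue)

    def handle_transfer_request(obj_id, gaining, losing):
        # ... the transfer bookkeeping itself, elided ...
        # The *server* queues the notice for the losing registrar;
        # clientA never touches clientB's queue directly.
        client_queues[losing].put(('transfer-pending', obj_id))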

>
> You have a strange model of server and thread implementation.  The idea 
> that it requires internal polling, no matter what, is simply wrong. 

Hey, there are all kinds of schools of thought on server and thread 
implementations.  I personally don't find mine strange -- I'm building 
on my experience with servers that I've worked on in the past.

>> ... somewhere along the line, a process or thread will have to poll 
>> something to see if there's anything to do.
>
>
> wrong. 

You keep saying that I'm wrong without describing your alternative. 
That's not very fair, is it?
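
The closest alternative I can picture is a blocking dequeue, something 
like the sketch below (Python's standard queue standing in for 
whatever a real server would use) -- though I'd still argue the 
waiting just moves down into the thread scheduler:

    from queue import Queue

    outbound = Queue()   # notifications destined for one connected client

    def writer_thread(conn):
        while True:
            msg = outbound.get()   # blocks on a condition variable; no busy loop
            conn.sendall(msg)      # sent the moment something is queued

    # Any server thread triggers delivery with:
    #   outbound.put(b'<transfer-pending .../>')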

>
>>> Actually, no.  Pushing has no wasted effort, unlike polling.  Hence 
>>> the overhead of push is only incurred when there is something to push.
>>
>>
>> Except for queuing when the message can't be delivered right away.
>
>
> queuing is an entirely separate bit of mechanism and overhead, with 
> equal overhead on top of push or pull. 

If you're going to push queued messages, then I don't see how it's an 
entirely separate mechanism.  Yes, the overhead is equal, except that 
with pulling the delivery work is not done by the server, so the load 
is distributed a little more evenly between client and server.

>
>
> the point that is being missed is that polling permits only a subset 
> of the behaviors and efficiencies that push permits.  Push demonstrates 
> vastly better scaling properties. 

Can you enumerate these behaviours, efficiencies and scaling 
properties?  I see the same messages being passed either way, so the 
argument is really over the delivery method.

>
>
> Internet application protocols have intentionally chosen simplistic 
> transaction models, in the past.  That has been fine for small-scale 
> activities.  As we try to permit serious, large-scale applications, we 
> need to use the kinds of mechanisms that modern, large-scale 
> transaction systems use.
>
> And have used for a very long time.  (Which is a semi-polite way of 
> suggesting that most of us need to realize that we are amateurs in the 
> field of transaction processing. Amateurs usually find the work of 
> professionals to be "more complicated" than we are used to...)
>
> (For reference, please note the "we".  That is not just for 
> politeness.  I very much include myself.  The difference, right now, 
> is that I have been trying to appreciate the nature of this other area 
> of expertise, rather than to automatically reject it because it makes 
> protocol work a bit more complicated.)

I've worked with Local Number Portability myself -- TMN, CMIP, GDMO 
and all that jazz.  Let me tell you, the toolkit we used was no 
cakewalk.  I worked with a really bright crew, but only the system 
architects and the lead senior developers touched that code.  Cool 
stuff, but what a headache.

But that was in a really controlled environment, with only a handful 
of big clients.  Provreg is defining a generic protocol that will have 
many application groups.  I don't think it would be fair to the 
smaller players to impose a heavy-duty protocol on them.

>
> The server needs to get the data to the client, no matter what.  You 
> seem to feel that sending it immediately, rather than having to wait an 
> arbitrary amount of time, somehow incurs onerous overhead.  It doesn't. 

We seem to be going in circles here.  I'm saying there are two 
activities: generating messages and delivering messages.  Pushing puts 
both activities in the server, while polling splits them between the 
server and the client, respectively.  With the client pinging anyway 
(yes, an integral part of my argument), the total overhead is roughly 
the same, but better balanced between the two parties.
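
Roughly, the poll model I have in mind looks like this -- a sketch 
only, with an invented <poll/> request and <none/> reply standing in 
for whatever the protocol would actually define:

    import time
    from collections import defaultdict
    from queue import Queue, Empty

    client_queues = defaultdict(Queue)   # per-registrar pending messages

    def server_answer(client_id):
        # Server side: "delivery" shrinks to answering the ping.
        try:
            return client_queues[client_id].get_nowait()
        except Empty:
            return b'<none/>'   # invented "nothing queued" response

    def client_loop(conn, interval=60):
        # Client side: the keepalive ping doubles as the pickup.
        while True:
            time.sleep(interval)
            conn.sendall(b'<poll/>')   # ping AND "anything for me?"
            reply = conn.recv(4096)
            if reply != b'<none/>':
                handle_notification(reply)   # hypothetical handler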

>
> Frankly I can't figure out the processing model that you are 
> describing, since you seem to view queuing as intimately involved in 
> the i/o model, although it does not need to be. 

Then *please* present an alternative so I can understand your point of 
view better.

>> What happens if there is a persistent queue for a particular 
>> registrar? If there are a bunch of old (but not yet expired) transfer 
>> notifications waiting around and a notification of low balance shows 
>> up, which do you try to send first?  Do you implement a 
>> prioritized queue?  If you do, should we include the message 
>> priorities in the protocol spec?  Then registrars have to 
>> additionally deal with message priorities.
>
>
> None of this is relevant to the choice of mechanism that sends the 
> "next" entry from the queue. 

So you're saying that a backlog of queued messages must be delivered 
in the order in which the messages were issued?  If that's true, then 
my concerns in the paragraph above disappear.
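
In other words, if the spec pins delivery to first-in, first-out 
order, the server-side structure stays trivial -- no priority field in 
the protocol and no tie-breaking logic.  A minimal sketch:

    from collections import deque

    backlog = deque()                        # strictly first-in, first-out

    backlog.append('transfer-pending #1')    # older transfer notices...
    backlog.append('transfer-pending #2')
    backlog.append('low-balance')            # ...the low-balance notice waits its turn

    while backlog:
        print(backlog.popleft())             # delivered oldest-first; no priorities needed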

Dan


