Bug 424 - TCP FIN notification callback needed
Status: RESOLVED FIXED
Product: ns-3
Classification: Unclassified
Component: network
Version: pre-release
Hardware: All
OS: All
Importance: P1 blocker
Assigned To: George Riley
Depends on:
Blocks: 697
 
Reported: 2008-11-28 04:02 UTC by Mathieu Lacage
Modified: 2009-10-07 23:12 UTC
5 users

See Also:


Attachments
Correct broken close semantics/implementation (14.75 KB, patch)
2009-09-30 13:11 UTC, George Riley

Description Mathieu Lacage 2008-11-28 04:02:49 UTC
I think that we lack a socket callback to be notified when the underlying socket stream is closed. That is, if I have a socket and I send data on it, and the remote side closes its socket before I am done sending, I want to be notified when my socket receives the FIN.
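
For illustration, a rough sketch of the desired usage; the SetCloseCallback name below is assumed, since no such hook exists yet:

    #include "ns3/socket.h"
    #include "ns3/callback.h"
    using namespace ns3;

    // Hypothetical usage sketch; no such callback exists in the tree yet.
    void PeerClosed (Ptr<Socket> socket)
    {
      // the remote side has sent a FIN and will send no more data;
      // the application can react here (e.g. stop queueing writes)
    }

    // at setup time (SetCloseCallback is an assumed name):
    //   socket->SetCloseCallback (MakeCallback (&PeerClosed));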
Comment 1 Rajib Bhattacharjea 2009-03-14 14:41:02 UTC
http://code.nsnam.org/raj/ns-3-tcp/rev/83ccf9c5afa8

Does this do the trick?
Comment 2 Gustavo J. A. M. Carneiro 2009-03-18 06:19:35 UTC
The inline docs say:

"""Sets the "close" callback, which is a notification
that the other end of the connection has sent a message
indicating that it will not send any more data, i.e.
the socket is closed for reading."""

While this may be true, I think it's even more important to note that, when the close callback is invoked, we have a guarantee that all data _we transmitted_ has been flushed and acknowledged by the receiver.

Otherwise looks good.
Comment 3 Tom Henderson 2009-03-18 09:32:22 UTC
(In reply to comment #1)
> http://code.nsnam.org/raj/ns-3-tcp/rev/83ccf9c5afa8
> 
> Does this do the trick?
> 

I do not think the implementation of this yields the desired semantics.  I think we want to be able to provide a signal that the application has read through the received data up to a received FIN segment.

from man 3 recv:

"Upon successful completion, recv() shall return the length of the message in bytes. If no messages are available to be received and the peer has performed an orderly shutdown, recv() shall return 0. Otherwise, -1 shall be returned and errno set to indicate the error. "

If you call this when the FIN is received, but you do not first make sure that the application has read all of the locally received data, it is not equivalent to a recv() that returns 0.  FIN may also be received out of order.

So, if we agree that we want to support the above recv() semantics (it is not clear from the initial bug report), I would suggest changing it, and changing the doxygen to say that it is supposed to align with the case in which a synchronous socket read() returns zero.
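
For concreteness, a sketch of the POSIX behaviour being described (plain BSD sockets, not ns-3; fd is assumed to be a connected TCP socket):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <cerrno>

    void DrainReceiveSide (int fd)
    {
      char buf[1024];
      ssize_t n;
      while ((n = recv (fd, buf, sizeof (buf), 0)) > 0)
        {
          // data queued ahead of the FIN is still delivered normally
        }
      if (n == 0)
        {
          // orderly shutdown: the FIN was reached and all preceding data was read
        }
      else if (errno == ECONNRESET)
        {
          // the peer aborted the connection with a RST
        }
    }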
Comment 4 Tom Henderson 2009-03-18 09:40:13 UTC
(In reply to comment #2)
> The inline docs say:
> 
> """Sets the "close" callback, which is a notification
> that the other end of the connection has sent a message
> indicating that it will not send any more data, i.e.
> the socket is closed for reading."""
> 
> While this may be true, I think it's even more important to note that, when the
> close callback is invoked we have a guarantee that all data _we transmitted_
> has been flushed and acknowledged by the receiver.

In this case, we would need to be notified of an ACK of our FIN, or of a detected RST, which is not what this patch does.

It would help to clarify what we want to model; I can't really tell from the initial bug report.  
Comment 5 Tom Henderson 2009-03-18 10:04:55 UTC
(In reply to comment #2)
> The inline docs say:
> 
> """Sets the "close" callback, which is a notification
> that the other end of the connection has sent a message
> indicating that it will not send any more data, i.e.
> the socket is closed for reading."""
> 
> While this may be true, I think it's even more important to note that, when the
> close callback is invoked we have a guarantee that all data _we transmitted_
> has been flushed and acknowledged by the receiver.

If we want to be able to convey that all data we transmitted has been successfully sent, we could do two things that should cover most cases:

- provide some equivalent of the SO_LINGER socket option, with a callback for notifying the application that close() would have returned, for apps that call close() after write()
- suggest to users the following socket call sequence (sketched below):

1. set the close callback
2. when done writing, call ShutdownSend()
3. wait for NotifyClose()
4. then call Close()

as in: http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable

We should also make sure that there is a means to reliably detect when the peer has sent a RST.
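
A minimal sketch of that call sequence against the ns-3 Socket API; the close-notification hook is precisely what this bug asks for, so its name and signature below are assumed:

    #include "ns3/socket.h"
    #include "ns3/callback.h"
    using namespace ns3;

    // Assumed name for the proposed notification, invoked when the peer's FIN arrives.
    void HandlePeerClose (Ptr<Socket> socket)
    {
      socket->Close ();    // step 4: close only after the peer has closed
    }

    void FinishSending (Ptr<Socket> socket)
    {
      socket->SetCloseCallback (MakeCallback (&HandlePeerClose));  // step 1 (hypothetical name)
      // ... final Send () calls ...
      socket->ShutdownSend ();   // step 2: send our FIN but keep the receive side open
      // step 3: wait for HandlePeerClose to be invoked
    }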


Comment 6 Rajib Bhattacharjea 2009-03-30 01:17:27 UTC
From the article Tom posted:
What we can do however is issue a shutdown(), which will lead to a FIN packet being sent to program B. Program B in turn will close down its socket, and we can detect this from program A: a subsequent read() will return 0.


This scenario (of detecting EOF when reading from the socket) is supported by the current TCP implementation; the packet you get in the RecvCallback will have Size=0 only when the read reaches EOF, which is when the read hits the data that had the FIN on it originally.

So the question becomes one of making sure Shutdown's semantics are correct.  Accordingly, Tom's suggested sequence for how to close a socket becomes:

1. when done writing, call ShutdownSend()
2. wait for RecvCallback to give an empty packet (sketched below); then you know the other end has sent a FIN too
3. call Close() (which might be meaningless in ns-3, since in *NIX calling it at this point is tantamount to kernel resource cleanup; it shouldn't have any network effect at all)
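
A sketch of how an application detects that with the current API, assuming the zero-size-packet-at-EOF behaviour described above:

    #include "ns3/socket.h"
    #include "ns3/packet.h"
    #include "ns3/callback.h"
    using namespace ns3;

    void HandleRead (Ptr<Socket> socket)
    {
      Ptr<Packet> packet = socket->Recv ();
      if (packet->GetSize () == 0)
        {
          // EOF: we have read up to the peer's FIN; Close () is now just cleanup
          socket->Close ();
          return;
        }
      // otherwise process packet->GetSize () bytes of application data
    }

    // registered in the application, e.g.:
    //   socket->SetRecvCallback (MakeCallback (&HandleRead));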
Comment 7 Tom Henderson 2009-03-30 01:41:02 UTC
(In reply to comment #1)
> http://code.nsnam.org/raj/ns-3-tcp/rev/83ccf9c5afa8
> 
> Does this do the trick?
> 

(In reply to comment #6)
> From the article Tom posted:
> What we can do however is issue a shutdown(), which will lead to a FIN packet
> being sent to program B. Program B in turn will close down its socket, and we
> can detect this from program A: a subsequent read() will return 0.
> 
> 
> This scenario (of detecting EOF when reading from the socket) is supported by
> the current TCP implementation; the packet you get in the RecvCallback will
> have Size=0 only when the read reaches EOF, which is when the read hits the
> data that had the FIN on it originally.

Raj, it seems you are saying that the originally reported bug (callback needed) is invalid, since a read of length zero suffices?  What is the immediate plan for this bug?
Comment 8 Rajib Bhattacharjea 2009-03-30 01:54:15 UTC
It seems invalid to me.  For refactoring close/shutdown (which is closely tied to the ideas in this bug) see the patch for bug 426.
Comment 9 Tom Henderson 2009-06-29 01:26:59 UTC
I think there are two problems identified in the comments below.

Problem 1) if the peer closes and sends a RST, then the sender should be able to detect this.  In a normal implementation, recv () would return -1 and errno would be set to ECONNRESET.  In ns-3, using the Ptr<Packet>-based API for receive, there is no way to convey this presently.  Using the byte-based recv() API that returns an int, there still is not a way to convey this because that method simply calls the Ptr<Packet>-based method.

A couple of options to fix the above API problem:
1) modify the existing NotifyDataRecv() callback in the Socket base class as follows:
- void SetRecvCallback (Callback<void, Ptr<Socket> >);
+ void SetRecvCallback (Callback<void, Ptr<Socket>, int >);
where the int can be used to signal "-1" to the function handling recv callbacks
2) add a new callback, such as a RecvError callback, that is invoked whenever read() would have returned -1 in a sockets implementation
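
For illustration, option 2 might look roughly like this (names are hypothetical):

    #include "ns3/socket.h"
    #include "ns3/callback.h"
    #include <cerrno>
    using namespace ns3;

    // Hypothetical addition to the Socket base class, mirroring option 2:
    // invoked whenever a POSIX recv () would have returned -1.
    //   void SetRecvErrorCallback (Callback<void, Ptr<Socket>, int> errorCallback);

    // An application handler could then distinguish a RST from other errors:
    void HandleRecvError (Ptr<Socket> socket, int errNo)
    {
      if (errNo == ECONNRESET)
        {
          // the peer reset the connection; abort the transfer
        }
    }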
  
I think that this bug could be renamed:  TCP RST notification needed

Problem 2) People seem to want a callback that mimics a blocking close with SO_LINGER semantics.

I think this is not a bug but a feature request to add SO_LINGER, along with some kind of CloseCallback that signals when the system would either have returned from a lingering close or timed out.  A new bug could be opened for that.
Comment 10 George Riley 2009-09-30 13:11:41 UTC
Created attachment 610
Correct broken close semantics/implementation

Fixes several problems with TCP close semantics.  Appears to fix bugs
326, 559, 663, and 664 also.
Comment 11 Tom Henderson 2009-10-07 00:16:32 UTC
(In reply to comment #10)
> Created an attachment (id=610) [details]
> Correct broken close semantics/implementation
> 
> Fixes several problems with TCP close semantics.  Appears to fix bugs
> 326, 559, 663, and 664 also.
> 

I think that the newly proposed socket API is necessary.  Because it isn't plumbed in for NSC yet, and because we are too close to code freeze, I suggest splitting this patch into two: one dealing with TCP internal closing fixes (which also fixes other TCP bugs) and one dealing with the two new socket callbacks.  I also suggest keeping this bug open at P2 and resolving it in the next month, including for NSC.

One question I have concerns when NotifyNormalClose should be called.  The proposed patch calls it when the TCP receiver has received the FIN in sequence (or has plugged all remaining sequence gaps), but the call is not tied to whether the application has read through to EOF.  Another possible implementation would call it only after the application has read through to EOF.
Comment 12 Josh Pelkey 2009-10-07 23:12:46 UTC
changeset c54b36fb7317