SSH -R problem
Pierre A. Humblet
Tue Apr 30 07:08:00 GMT 2002
Corinna Vinschen wrote:
> That makes sense... but doesn't that again break something else?
What it might break is the case for which linger was added in the first
place, i.e. processes terminating and Windows flushing their outgoing
packet queue (in the case of slow connections), as opposed to Unix,
which maintains the queue for a while after process termination.
Now I have never observed this myself, and don't have a strong opinion.
Do we have a reproducible case to understand exactly what's going on?
I am not convinced by the "user space" argument:
we know too well that sockets consume system buffers.
At any rate the initial problem occurs only at process termination time.
The current issue is that we don't want to block processes in the prime of
their life. So an ideal fix would detect "end of life" situations. Here is a
brainstorming idea: on a Cygwin close(), do a shutdown(.,2), free the Cygwin
structure and start a task to do a blocking linger + closesocket() on the
Windows socket. At process termination, wait until all such tasks are done.
Exception: in the case of a blocking socket where the application had
set linger to On, do a Windows closesocket() immediately.
By the way, how does Unix behave when doing a close() on a non-blocking
socket when linger is on? That seems contradictory; linger on is supposed
to mean:
"SO_LINGER controls the action taken when unsent messages are queued on
socket and a close(2) is performed. If the socket promises reliable
delivery of data and SO_LINGER is set, the system will block the process
on the close attempt until it is able to transmit the data or until it
decides it is unable to deliver the information."
Short of something like this we are between a rock and a hard place.
I would think that applications that go to the trouble of setting the
socket to non-blocking care more about not blocking than about potentially
dropping packets at the end of their life.
More information about the Cygwin-patches mailing list