cygrunsrv + sshd + rsync = 20 times too slow -- throttled?

Ken Brown kbrown@cornell.edu
Wed Sep 1 12:56:05 GMT 2021


On 9/1/2021 4:46 AM, Corinna Vinschen wrote:
> On Sep  1 17:23, Takashi Yano wrote:
>> On Wed, 1 Sep 2021 10:07:48 +0200
>> Corinna Vinschen wrote:
>>> On Sep  1 09:16, Takashi Yano wrote:
>>>> On Wed, 1 Sep 2021 08:02:20 +0900
>>>> Takashi Yano wrote:
>>>>> On Tue, 31 Aug 2021 17:50:14 +0200
>>>>> Corinna Vinschen wrote:
>>>>>> So for the time being I suggest the below patch on top of topic/pipe.
>>>>>> It contains everything we discussed so far.
>>>>>
>>>>> One more thing. 'git log' cannot stop normally with 'q' with your patch.
>>>>
>>>> The same happens with 'yes | less'.
>>>>
>>>> The cause is that the write side cannot detect that the read side
>>>> has closed, because query_hdl (a read handle) is still open.
>>>
>>> Oh
>>>
>>> my
>>>
>>> god.
>>>
>>>
>>> That kills the entire idea of keeping the read handle :(
>>
>> One idea is:
>>
>> Count the read handles and write handles opened, using NtQueryObject().
>> If the numbers of opened handles are equal to each other, only
>> the write side (the pair of write handle and query_hdl) is alive.
>> In this case, write() returns an error.
>> If the read side is alive, the number of read handles is greater
>> than the number of write handles.
> 
> Interesting idea.  But where do you do the count?  The event object
> will not get signalled, so WFMO will not return when performing a
> blocking write.
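
For reference, the counting Takashi describes might look roughly like this.
This is just a sketch with made-up names, using the public winternl.h
declarations rather than our ntdll.h, and assuming that
NtQueryObject(ObjectBasicInformation) reports the HandleCount of the object
the handle refers to, and that every writer holds exactly one write handle
plus one query_hdl:

#include <windows.h>
#include <winternl.h>

#ifndef NT_SUCCESS
#define NT_SUCCESS(s) (((NTSTATUS) (s)) >= 0)
#endif

/* HandleCount of the object the given handle refers to, or 0 on error.  */
static ULONG
handle_count (HANDLE h)
{
  PUBLIC_OBJECT_BASIC_INFORMATION obi;
  ULONG len;
  if (!NT_SUCCESS (NtQueryObject (h, ObjectBasicInformation,
                                  &obi, sizeof obi, &len)))
    return 0;
  return obi.HandleCount;
}

/* True if every remaining read handle is just a writer's query_hdl,
   i.e. no real reader is left and write() should fail with EPIPE.  */
static bool
no_reader_left (HANDLE write_hdl, HANDLE query_hdl)
{
  return handle_count (query_hdl) <= handle_count (write_hdl);
}

That still leaves the question of when a blocked writer gets a chance to run
this check, which is where the event below comes in.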

What if we create an event that we signal every time a reader closes, and we add 
that to the events that WFMO is waiting for?
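
Very roughly, and purely as a sketch: read_closed_evt is a made-up per-pipe
event that would have to be duplicated into every opener, and
write_ready_evt just stands for whatever the blocking write already waits
on:

#include <windows.h>

/* Reader side: signal the (made-up) read_closed_evt when this read
   descriptor goes away.  */
void
close_read_side (HANDLE read_hdl, HANDLE read_closed_evt)
{
  CloseHandle (read_hdl);
  SetEvent (read_closed_evt);   /* wake any writer blocked in WFMO */
}

/* Writer side: wait until there is room to write or a reader closed.
   Returns false if the caller should re-check the readers.  */
bool
wait_writable (HANDLE write_ready_evt, HANDLE read_closed_evt)
{
  HANDLE w[2] = { write_ready_evt, read_closed_evt };
  DWORD ret = WaitForMultipleObjects (2, w, FALSE, INFINITE);
  if (ret == WAIT_OBJECT_0)     /* room in the pipe, retry the write */
    return true;
  return false;                 /* reader closed (or error): re-check */
}

A writer woken up by the second event would re-run the reader check and
return an error from write() if nobody is left.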

If this doesn't work for some reason, a different (but more complicated) idea is 
to keep a count of the number of open readers in shared memory.  When that count 
is 0, write returns an error.  I'm thinking of shared memory as in topic/af_unix 
(which I copied in the fifo implementation), but maybe something simpler would 
work since we only have a single variable to keep track of.
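
Spelled out, with a made-up name scheme and a plain page-file-backed section
instead of the af_unix-style shared memory class, it would be little more
than:

#include <windows.h>

struct pipe_shmem
{
  volatile LONG n_readers;      /* open read descriptors, all processes */
};

/* Map the (made-up) per-pipe section; every opener of the pipe maps it.
   The section handle is kept open for the lifetime of the descriptor.  */
static pipe_shmem *
map_pipe_shmem (const wchar_t *sect_name)
{
  HANDLE h = CreateFileMappingW (INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                 0, sizeof (pipe_shmem), sect_name);
  if (!h)
    return NULL;
  return (pipe_shmem *) MapViewOfFile (h, FILE_MAP_WRITE, 0, 0,
                                       sizeof (pipe_shmem));
}

void reader_opened (pipe_shmem *s) { InterlockedIncrement (&s->n_readers); }
void reader_closed (pipe_shmem *s) { InterlockedDecrement (&s->n_readers); }

/* Writer: if this drops to 0, fail the write with EPIPE.  */
bool readers_left (pipe_shmem *s) { return s->n_readers > 0; }

InterlockedIncrement/Decrement should be enough synchronization, since the
writer only needs a snapshot of whether the count has reached zero.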

Ken


