This is the mail archive of the mailing list for the Cygwin project.
Re: inetutils 1.5 / ftpd problem: 426 Data connection: No buffer space available.
- From: Corinna Vinschen <corinna-cygwin at cygwin dot com>
- To: cygwin at cygwin dot com
- Date: Wed, 7 May 2008 13:30:13 +0200
- Subject: Re: inetutils 1.5 / ftpd problem: 426 Data connection: No buffer space available.
- References: <email@example.com> <48180A91.firstname.lastname@example.org> <20080430085035.GM23852@calimero.vinschen.de> <48195BCB.email@example.com>
- Reply-to: cygwin at cygwin dot com
On May 1 01:57, Charles Wilson wrote:
> Corinna Vinschen wrote:
>> On Apr 30 01:58, Charles Wilson wrote:
>>> If [disabling mmap] *does* fix the problem, it may point to an issue with
>>> cygwin-1.5's mmap implementation, or with XP's handling of the underlying
>>> NtCreateSection()...mmap is not supposed to be CPU-intensive.
>> There might be a bug lurking somewhere. Could you create a very simple
>> testcase which basically behaves like ftpd for debugging?
> gcc -o server server.c
> gcc -o client client.c
> Add an entry to /etc/services on both machines, like:
> example 22725/tcp
> or you could edit the two files and use a hardcoded port number instead of
> a service name and getservbyname().
> And don't forget to open a hole in your server machine's firewall for that.
> On the server machine, invoke as:
> $ server <filename>
> This file is the one that will be transferred to the client. A large file
> works well; for example:
> $ dd if=/dev/urandom of=ReallyBigFile bs=1M count=250
> On the client:
> $ client <hostname_of_server> <filename>
> <filename> is where the client will save the transferred data.
> server is a traditional daemon, which forks off a copy to handle each new
> connection. That copy is the one you want to debug/strace/whatever.
> With this pair of programs, I saw "sane" memory usage in all cases when NOT
> using mmap, and I saw "insane" memory usage for all mmap cases except when
> blocksize was 1k.
> To switch among the various cases, edit the server.c file to #define/#undef
> HAVE_MMAP, and change the value of LARGE_TRANSFER_BLOCKSIZE.
IIUC, the testcase should exhibit the problem OOTB. HAVE_MMAP is
defined and LARGE_TRANSFER_BLOCKSIZE is set to 32K. I did what you
wrote above, I built server and client, added the example port to
/etc/services, created the ReallyBigFile from /dev/urandom as above...
However, I can't reproduce any ill effect. This testcase mmaps the
file exactly once and then calls as many 32K writes as necessary to
write the whole file. I don't see any waste of memory at all.
If you examine the memory usage with Task Manager or, better, with
Sysinternals' Process Explorer, you'll see the memory usage go up
over time. But that's no problem. What you see is the mapping of the
file into the physical memory of the machine. With each write, the
process accesses another 32K bytes of the file mapping, so the OS has to
map another 32K of the file into the process memory. Actually this is
done in 64K chunks, but that doesn't matter here. What you see is quite
normal behaviour and has nothing to do with Cygwin's mmap implementation.
> Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
> Problem reports: http://cygwin.com/problems.html
> Documentation: http://cygwin.com/docs.html
> FAQ: http://cygwin.com/faq/
Corinna Vinschen                  Please, send mails regarding Cygwin to
Cygwin Project Co-Leader          cygwin AT cygwin DOT com