Sv: Sv: Limit for number of child processes

sten.kristian.ivarsson@gmail.com
Fri Aug 28 08:38:55 GMT 2020


> > Hi Corinna
> >
> >>> Dear cygwin folks
> >>>
> >>> It seems like there's a limit of the number of possible child
> >>> processes defined to 256 with 'NPROCS' in
> >>> //winsup/cygwin/child_info.h used in 'cprocs' in
> >>> //winsup/cygwin/sigproc.cc
> >>>
> >>> 256 is quite few possible children in an enterprise environment and
> >>> perhaps the limit should be limited by the physical resources or
> >> possibly Windows ?
> >>
> >> The info has to be kept available in the process itself so we need
> >> this array of NPROCS * sizeof (pinfo).
> >>
> >> Of course, there's no reason to use a static array, the code could
> >> just as well use a dynamically allocated array or a linked list.
> >> It's just not the way it is right now and would need a patch or
> rewrite.
> >>
> >> As for the static array, sizeof pinfo is 64, so the current size of
> >> the array is just 16K.  We could easily bump it to 64K with NPROCS
> >> raised to
> >> 1024 for the next Cygwin release, at least on 64 bit.
> >> I don't think we should raise this limit for 32 bit Cygwin, which is
> >> kind of EOL anyway, given the massive restrictions.
> >
> > I don't know the exact purpose of this and how the cprocs is used, but
> > I'd prefer something totally dynamic 7 days out of 7 or otherwise
> > another limit would just bite you in the ass some other day instead
> > ;-)
> >
> > A linked list could be used if you wanna optimize (dynamic) memory
> > usage but an (amortized) array would probably provide faster linear
> > search but I guess simplicity of the code and external functionality
> > is the most important demands for this choice
> 
> Any change here (aside from just increasing NPROCS) would have to be done
> with care to avoid a performance hit.  I looked at the history of changes
> to sigproc.cc, and I found commit 4ce15a49 in 2001 in which a static array
> something like cprocs was replaced by a dynamically allocated buffer in
> order to save DLL space.  This was reverted 3 days later (commit e2ea684e)
> because of performance issues.


I wonder what kind of performance issue? Nevertheless, that old commit
didn't make the number of possible children more dynamic: the count was
still restricted to NPROCS (or ZOMBIEMAX/NZOMBIES), the array was just no
longer statically allocated. But yes, accessing dynamically allocated
memory can in theory be slower than statically allocated memory; without
measuring, one cannot tell ;-) Today's hardware is pretty good at
prefetching etc., but as I said, it needs measurements

Looking at the code, I didn't see that many searches (though the existing
ones could be called very frequently). If searching is a bottleneck, it
could be a good idea to keep the children sorted so that a binary search
can be used instead of a linear one (which would, at least in theory, give
random-access containers a huge advantage)

I'm confident that you'll find the best solution to this, but 256
children is not enough, at least for us

BTW, when the limit is reached, errno is set to EAGAIN, but would ENOMEM
be more appropriate (regardless of whether NPROCS is reached or malloc
returns NULL)?

Best regards,
Kristian

> Ken
