Re: Limit for number of child processes

sten.kristian.ivarsson@gmail.com
Fri Aug 28 14:09:02 GMT 2020


Hi all

> > > > > > > It seems like there's a limit of the number of possible
> > > > > > > child processes defined to 256 with 'NPROCS' in
> > > > > > > //winsup/cygwin/child_info.h used in 'cprocs' in
> > > > > > > //winsup/cygwin/sigproc.cc
> > > > > > >
> > > > > > > 256 is quite a low limit for child processes in an
> > > > > > > enterprise environment, and perhaps the limit should
> > > > > > > instead be bounded by the physical resources, or possibly
> > > > > > > by Windows?
> > > > > >
> > > > > > The info has to be kept available in the process itself so we
> > > > > > need this array of NPROCS * sizeof (pinfo).
> > > > > >
> > > > > > Of course, there's no reason to use a static array; the code
> > > > > > could just as well use a dynamically allocated array or a
> > > > > > linked list.  It's just not the way it is right now and would
> > > > > > need a patch or rewrite.
> > > > > > [...]
> > > > > A linked list could be used to optimize (dynamic) memory
> > > > > usage, but an (amortized) array would probably provide faster
> > > > > linear search.  I guess simplicity of the code and external
> > > > > functionality are the most important demands for this choice.
> > > >
> > > > Any change here (aside from just increasing NPROCS) would have to
> > > > be done with care to avoid a performance hit.  I looked at the
> > > > history of changes to sigproc.cc, and I found commit 4ce15a49 in
> > > > 2001 in which a static array something like cprocs was replaced by
> > > > a dynamically allocated buffer in order to save DLL space.  This
> > > > was reverted 3 days later (commit e2ea684e) because of
> > > > performance issues.
> > >
> > > I wonder what kind of performance issue?
> > > [...]
> > I don't know for sure, but I doubt it had anything to do with
> > memory access.  My guess is that the performance hit came from the
> > need to free the allocated memory after every fork call (see
> > sigproc_fixup_after_fork).
> 
> Either way, I rewrote this partially so we now have a default array size
> for 255 child processes on 32 bit and 1023 child processes on 64 bit.
> 
> The new code is mainly a minor update in that it converts the code
> directly accessing the array into using a class, encapsulating the
> mechanism used under the hood behind a class barrier and access methods.
> 
> As a POC, I added a bit of code to maintain a second array, which is only
> allocated (using HeapAlloc so as not to spill into the child processes) if
> the default array overflows.  This second array adds room for another
> 1023 (32 bit) or 4095 (64 bit) child processes, raising the number of max
> child processes per process to 1278 on 32 bit and 5118 on 64 bit.
> 
> My STC (simple test case), just forking like crazy, overflowed my 4 Gigs
> pagefile after roughly 1450 child processes.  I'm pretty confident that
> this POC implementation is sufficient for a while, even in enterprise
> scenarios.
> 
> And if not, we can now easily tweak the numbers without having to tweak
> much of the code.


That is super!
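
Just so I understand the scheme: a minimal sketch of such a two-tier
table could look something like the following (all names here are made
up for illustration; the real class in sigproc.cc will differ):

/* Illustrative sketch only -- names and types are made up, not the
   actual sigproc.cc code.  A fixed default array lives in the process
   image; the overflow array is allocated with HeapAlloc on demand so
   it doesn't automatically spill into forked children.  */
#include <windows.h>
#include <cstddef>

struct pinfo_stub { DWORD pid; /* ... */ };  /* stand-in for pinfo */

class child_procs
{
  static const size_t default_count = 1023;   /* 255 on 32 bit */
  static const size_t overflow_count = 4095;  /* 1023 on 32 bit */

  size_t count_ = 0;
  pinfo_stub defaults_[default_count];  /* static part, in the image */
  pinfo_stub *overflow_ = nullptr;      /* allocated only on overflow */

public:
  size_t count () const { return count_; }

  /* Access method: callers never see which array a slot lives in. */
  pinfo_stub &operator[] (size_t idx)
  {
    return idx < default_count ? defaults_[idx]
                               : overflow_[idx - default_count];
  }

  bool add (const pinfo_stub &p)
  {
    if (count_ >= default_count + overflow_count)
      return false;                     /* hard limit reached */
    if (count_ >= default_count && !overflow_)
      {
        overflow_ = (pinfo_stub *)
          HeapAlloc (GetProcessHeap (), HEAP_ZERO_MEMORY,
                     overflow_count * sizeof (pinfo_stub));
        if (!overflow_)
          return false;
      }
    (*this)[count_++] = p;
    return true;
  }
};

The nice part is that the static half keeps the common case (few
children) as cheap as before, while the HeapAlloc'd half costs nothing
until it is actually needed.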

Otherwise, amortized dynamic arrays (such as C++ std::vector) provide
deterministic amortized complexity and are proven to work well for most
use cases, with no surprise hiccups (hence, amortized).  I guess the
biggest problem, if there is one, is determining when to shrink the
allocated memory as the number of children goes down.
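
For instance, a common policy is to double on growth but shrink only
when occupancy falls below a quarter of capacity; the gap between the
two thresholds avoids realloc thrashing when the child count oscillates
around a boundary.  A rough sketch (purely illustrative, nothing from
the actual Cygwin code):

/* Illustrative only: an amortized array with a shrink policy.  T must
   be trivially copyable, since realloc moves elements bytewise.  */
#include <cstddef>
#include <cstdlib>

template <typename T>
class amortized_array
{
  T *data_ = nullptr;
  size_t count_ = 0;
  size_t cap_ = 0;

public:
  size_t count () const { return count_; }
  T &operator[] (size_t i) { return data_[i]; }

  bool push (const T &v)
  {
    if (count_ == cap_)
      {
        size_t newcap = cap_ ? cap_ * 2 : 16;   /* amortized growth */
        T *p = (T *) realloc (data_, newcap * sizeof (T));
        if (!p)
          return false;
        data_ = p;
        cap_ = newcap;
      }
    data_[count_++] = v;
    return true;
  }

  void remove (size_t i)        /* swap-with-last; order not kept */
  {
    data_[i] = data_[--count_];
    if (cap_ > 16 && count_ < cap_ / 4)
      {
        /* Halve rather than shrink-to-fit so the growth and shrink
           thresholds never meet, preventing realloc thrashing.  */
        T *p = (T *) realloc (data_, (cap_ / 2) * sizeof (T));
        if (p)
          {
            data_ = p;
            cap_ /= 2;
          }
      }
  }
};

Note that std::vector itself never shrinks automatically (shrink_to_fit
is only a non-binding request), so whichever container is used, the
shrinking remains a policy decision rather than something the data
structure gives you for free.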

Best regards,
Kristian



> For testing purposes I uploaded a developer snapshot to
> https://cygwin.com/snapshots/
> 
> 
> Corinna
> 
> --
> Corinna Vinschen
> Cygwin Maintainer
> --
> Problem reports:      https://cygwin.com/problems.html
> FAQ:                  https://cygwin.com/faq/
> Documentation:        https://cygwin.com/docs.html
> Unsubscribe info:     https://cygwin.com/ml/#unsubscribe-simple


