Running 4096 parallel cat processes on remote share results in only 1018 succeeding

Linda Walsh cygwin@tlinx.org
Fri Oct 10 03:07:00 GMT 2014


Nathan Fairchild wrote:
> When I run a script like so:
> cat: /u/pe/env_files/transpath.map: No such file or directory 
> ./run_many.sh: fork: retry: Resource temporarily unavailable 
> ./run_many.sh: fork: Resource temporarily unavailable 
> 
> $ grep -l PATH out* | wc -l 
> 1018 

> I think I'm probably hitting the 256 process limit because of the I/O slowdown the network presents? I don't get this issue running on (much faster) local disk. 
---
Are you only reading from the net, or are you writing
to the net too?  I.e., what is your CWD?  Is out$i.log
on the net or local?  I tried it locally and couldn't
reproduce your symptoms.
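
A quick way to separate the two cases (a sketch; the paths are
assumptions based on your error output):

# fan out the same read twice, changing only where the logs land
for i in $(seq 4096); do cat /u/pe/env_files/transpath.map >/tmp/out$i.log & done; wait   # logs local
for i in $(seq 4096); do cat /u/pe/env_files/transpath.map >out$i.log & done; wait        # logs in CWD

If the local-output run succeeds and the CWD run doesn't, your CWD is
on the share and the writes are the problem.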

Your problem is more likely the server hosting the remote file system.

While you can write files locally that fast, the remote server adds
enough per-file delay (a few ms each) that it can't keep up.  Even
processing your network requests takes CPU time.
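
For a sense of scale (the 5 ms figure is an assumption, not a
measurement): 4096 creates at 5 ms of server overhead each is roughly
20 seconds of serialized work, plenty of time for forks to start
failing.  You can measure the real per-file cost on your share with
something like:

# hypothetical probe; point it at a writable spot on the share
time for i in $(seq 100); do : >/u/pe/probe$i.tmp; done
rm -f /u/pe/probe*.tmp

Divide the elapsed time by 100 for the per-file cost.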

/i/fdlims> cat mrun
#!/bin/bash
ulimit -n 3200                     # raise the open-file limit; children inherit it
for i in $(seq $1)                 # $1 = number of cats to launch
do exec cat mrun >/tmp/tmp$i.log&  # '&' forks a subshell; exec replaces it with cat
done
/i/fdlims> ll /tmp/tmp*|wc
    4096   28672  191405
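
(That run was presumably started as "./mrun 4096"; I'm inferring the
count from the wc output.  All 4096 logs were created locally.)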
---
I'm not sure how accurate the ulimit command under Cygwin
is... it may be accurate; I'm just saying I don't know.
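
If you want to see what your shell reports (whether Cygwin actually
enforces these numbers is exactly what I don't know):

ulimit -n    # max open file descriptors
ulimit -u    # max user processes -- relevant to your 256-process theory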

