This is the mail archive of the cygwin mailing list for the Cygwin project.
Re: "du -b --files0-from=-" running out of memory
- From: Eric Blake <ebb9 at byu dot net>
- To: cygwin at cygwin dot com, bug-coreutils <bug-coreutils at gnu dot org>
- Date: Sun, 23 Nov 2008 07:14:41 -0700
- Subject: Re: "du -b --files0-from=-" running out of memory
- References: <email@example.com>
[adding the upstream coreutils list]
According to Barry Kelly on 11/23/2008 6:24 AM:
> I have a problem with du running out of memory.
> I'm feeding it a list of null-separated file names via standard input,
> to a command-line that looks like:
> du -b --files0-from=-
> The problem is that when du is run in this way, it leaks memory like a
> sieve. I feed it about 4.7 million paths but eventually it falls over as
> it hits the 32-bit address space limit.
That's because du must keep track of which files it has visited, so that
it can decide whether to count or ignore a hard link to a file it has
already seen. The upstream ls source code was recently changed to store
this information only for command-line arguments, rather than for every
file visited; I wonder if a similar change for du would make sense.
> Now, I can understand why a du -c might want to exclude excess hard
> links to files, but that at most requires a hash table for device &
> inode pairs - it's hard to see why 4.7 million entries would cause OOM -
> and in any case, I'm not asking for a grand total.
> Is there any other alternative to running e.g. xargs -0 du -b, possibly
> with a high -n <arg> to xargs to limit memory leakage?
> -- Barry
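As a concrete version of the workaround Barry mentions, the batching approach might look like the sketch below (assuming GNU find, xargs, and du; the demo directory is illustrative). The -n limit caps how many filenames each du invocation receives, which bounds the visited-inode table, at the cost that hard links split across batches get counted more than once.

```shell
# Sketch of the xargs batching workaround. Each du invocation sees at
# most 1000 paths, so its hard-link tracking table stays small.
mkdir -p /tmp/du_batch_demo
printf 'hello'  > /tmp/du_batch_demo/a.txt
printf 'world!' > /tmp/du_batch_demo/b.txt
find /tmp/du_batch_demo -type f -print0 | xargs -0 -n 1000 du -b
```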
Don't work too hard, make some time for fun as well!
Eric Blake firstname.lastname@example.org