This is the mail archive of the cygwin mailing list for the Cygwin project.


Re: "du -b --files0-from=-" running out of memory


[adding the upstream coreutils list]

According to Barry Kelly on 11/23/2008 6:24 AM:
> I have a problem with du running out of memory.
> I'm feeding it a list of null-separated file names via standard input,
> to a command-line that looks like:
>   du -b --files0-from=-
> The problem is that when du is run in this way, it leaks memory like a
> sieve. I feed it about 4.7 million paths but eventually it falls over as
> it hits the 32-bit address space limit.

That's because du must keep track of which files it has visited, so that
it can decide whether to count or skip a hard link to a file it has
already seen.  The upstream ls source code was recently changed to store
this information only for command-line arguments, rather than for every
file visited; I wonder if a similar change would make sense for du.
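To make the bookkeeping concrete, here is a hypothetical sketch (not
du's actual source) of the hard-link deduplication described above: a
set of (device, inode) pairs records files already counted, and only
files with a link count above one need an entry — names, structure, and
the helper itself are illustrative.

```python
import os

def du_bytes(paths):
    """Sum apparent sizes, counting each hard-linked inode only once.

    Illustrative sketch of du-style bookkeeping: (st_dev, st_ino)
    pairs identify an inode uniquely across a tree; only files with
    st_nlink > 1 can be reached twice, so only those are remembered.
    """
    seen = set()
    total = 0
    for path in paths:
        st = os.lstat(path)          # lstat: do not follow symlinks
        if st.st_nlink > 1:
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue             # hard link to an inode already counted
            seen.add(key)
        total += st.st_size
    return total
```

Even with the st_nlink > 1 restriction, a tree with millions of
multiply-linked files still grows this set without bound, which is
consistent with the memory growth Barry reports.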

> Now, I can understand why a du -c might want to exclude excess hard
> links to files, but that at most requires a hash table for device &
> inode pairs - it's hard to see why 4.7 million entries would cause OOM -
> and in any case, I'm not asking for a grand total.
> Is there any other alternative to running e.g. xargs -0 du -b, possibly
> with a high -n <arg> to xargs to limit memory leakage?
> -- Barry
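For what it's worth, the xargs workaround Barry mentions can be spelled
out roughly like this (the find invocation and the batch size are
illustrative, not from his setup):

```shell
# Feed null-separated names to du in bounded batches so each du
# process holds at most 500000 entries of hard-link state.
# Trade-off: a hard link whose copies land in different batches
# is counted once per batch.
find . -type f -print0 | xargs -0 -n 500000 du -b
```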

--
Don't work too hard, make some time for fun as well!

Eric Blake   


