This is the mail archive of the guile@cygnus.com mailing list for the guile project.



Re: tools to make non-conservative GC feasible.


Chris.Bitmead@misys.com.au writes:

>  >> The reason we like "conservative collectors" is because we can be lazy
>  >> while programming and not worry about what does and doesn't have to be
>  >> visible to the collector.
> 
> >I thought the reason we liked conservative collectors is that they
> >work even when interacting with arbitrary C code.
> 
> Well, in the case of Guile, GC seems to be only stack-conservative, so
> you have to think about it anyway.

Not really. If you're interfacing with arbitrary C code, the C code is
already doing its own memory management. If you need to pass it objects
from Scheme, you have to convert them into something the C side
recognizes and handle that conversion yourself, but the internal
operation of the C code is something you don't have to care about.
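To make that concrete, here's a minimal sketch in C. None of these names
are Guile's actual API; they're hypothetical stand-ins for whatever
conversion helper your Guile version provides. The point is that the
conversion happens once at the boundary, and the library only ever sees
a plain copy it manages with its usual malloc/free discipline:

/* Sketch: handing a Scheme string to an external C library.
 * scheme_string_to_c_copy() is a hypothetical stand-in for the real
 * conversion routine; the library never sees GC-managed memory. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for arbitrary C code doing its own memory management. */
static void external_lib_consume(const char *s)
{
    printf("library got: %s\n", s);
}

/* Hypothetical conversion: copy the characters out of a GC-managed
 * Scheme string into memory the C side owns outright. */
static char *scheme_string_to_c_copy(const char *gc_managed_chars)
{
    size_t n = strlen(gc_managed_chars) + 1;
    char *copy = malloc(n);
    if (copy)
        memcpy(copy, gc_managed_chars, n);
    return copy;
}

int main(void)
{
    /* Pretend this buffer lives in the Scheme heap. */
    const char *scheme_string = "hello from the Scheme side";

    char *copy = scheme_string_to_c_copy(scheme_string);
    if (copy) {
        external_lib_consume(copy);  /* the collector is never involved */
        free(copy);
    }
    return 0;
}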

> The reason I like precise GC is partly because I think all objects should
> be subject to GC, even objects that refer to external entities like file
> descriptors, and stuff like that. For all the same reasons it's good to GC
> memory, it's also good to GC file descriptors. But I don't trust a
> conservative GC enough to do that. And I also wonder what will happen
> if by some coincidence a program allocates some big chunks of memory whose
> addresses just happen to correspond to numbers held in memory.

The conservative collector won't keep arbitrary memory around; only
objects the GC already knows about can be falsely retained.

The question is, how often does this actually happen in a real
program? For a dead object never to be collected, there has to be a
place on the stack that never changes, or a value that is pushed onto
the stack quite frequently, that happens to correspond to an object
Guile knows about. It can and does happen, but it's very unlikely to
happen with a significant number of objects.
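Here's a sketch of the coincidence in question (nothing below runs an
actual collector; it just shows the shape of the problem): a stack word
whose bit pattern happens to equal an object's address is
indistinguishable, to a conservative scan, from a real reference.

/* Illustration only: an integer on the stack whose value happens to
 * equal the address of an otherwise-dead heap object.  A conservative
 * stack scan can't tell it from a pointer, so the object would be
 * retained; a precise collector would know it's just a number. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *dead_object = malloc(sizeof *dead_object);
    if (!dead_object)
        return 1;

    /* Suppose an unrelated computation leaves this value in a
     * long-lived stack slot for the rest of the program's run. */
    uintptr_t unlucky_number = (uintptr_t)dead_object;

    printf("stack word 0x%lx looks exactly like a pointer\n",
           (unsigned long)unlucky_number);

    free(dead_object);
    return 0;
}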

In the file descriptor case, unless I'm missing something, files are
expected to be explicitly closed from Scheme, so you aren't really
wasting fds if one stays live, just the space held by the fd structure.
(This is unfortunate, because it lessens the reliability of guardians,
although I really doubt you'd see a situation where you ran out of fds
because they were all being held up by the stack; you'd need some
really bad luck.)
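Roughly the pattern I mean, sketched in C (this is not Guile's actual
port code, just an illustration): the descriptor is released by the
explicit close, and the routine run at collection time only gives back
the wrapper's memory, so a falsely retained wrapper costs a few bytes
rather than an fd.

/* Sketch of a wrapper object around a file descriptor.  Explicit close
 * releases the fd; the collector's free routine only reclaims memory. */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

struct fd_wrapper {
    int fd;              /* -1 once explicitly closed */
};

/* Called from Scheme via an explicit close operation. */
static void wrapper_close(struct fd_wrapper *w)
{
    if (w->fd >= 0) {
        close(w->fd);
        w->fd = -1;
    }
}

/* Called when the collector finally reclaims the wrapper;
 * it never touches the descriptor itself. */
static void wrapper_free(struct fd_wrapper *w)
{
    free(w);
}

int main(void)
{
    struct fd_wrapper *w = malloc(sizeof *w);
    if (!w)
        return 1;
    w->fd = open("/dev/null", O_RDONLY);
    wrapper_close(w);   /* the fd is gone here, closed explicitly */
    wrapper_free(w);    /* later: only memory is returned */
    return 0;
}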

> It just
> worries me. Part of a programmer's ideal is to take care of every case
> and not have anything non-deterministic.
> 

Even some widely used algorithms have such a bad worst case that a run
could take far longer than anyone would tolerate. Quicksort, for one,
which probably isn't the best example since there are a lot of
workarounds that make the bad case almost completely unlikely, but it's
the first thing that pops to mind. Generally it doesn't happen, so we
accept the unlikely possibility and move on. This is a case where I
think you have to consistently see incorrect behavior in a real program
before you chuck a very useful feature that, by all accounts, works
quite well.
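To make the quicksort aside concrete (a sketch, nothing to do with
Guile): the usual workaround is a randomized pivot, which turns the
quadratic worst case from something a particular input triggers every
time into something that needs sustained bad luck, which is exactly the
kind of trade-off being accepted here.

/* Quicksort with a randomized pivot (Lomuto partition).  A sorted
 * input no longer degrades to O(n^2) deterministically; hitting the
 * bad case now requires consistently unlucky pivot choices. */
#include <stdio.h>
#include <stdlib.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static void quicksort(int *v, int lo, int hi)
{
    if (lo >= hi)
        return;

    /* Pick the pivot at random and move it to the end. */
    int p = lo + rand() % (hi - lo + 1);
    swap(&v[p], &v[hi]);

    int pivot = v[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (v[j] < pivot)
            swap(&v[i++], &v[j]);
    swap(&v[i], &v[hi]);

    quicksort(v, lo, i - 1);
    quicksort(v, i + 1, hi);
}

int main(void)
{
    int v[] = { 5, 1, 4, 1, 5, 9, 2, 6 };
    int n = (int)(sizeof v / sizeof v[0]);

    quicksort(v, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}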

-- 
Greg