This is the mail archive of the guile@cygnus.com mailing list for the guile project.
Telford Tendys <telford@triangle.triode.net.au> writes:

> The trouble with a hash table is that it needs to modulo the hash numbers
> by the table size; when you change the table size you change the modulo,
> and suddenly the objects need to go into different slots. Thus you
> `rehash' the whole table into a new table. The standard Java library
> has an implementation of a hash table that calculates its own sparsity
> and rehashes itself when it feels that it needs to.
>
> Hash tables are fast to pull things out of and fast to put things
> into, but slow to resize -- them's the breaks.

Well, yes and no. The standard hash table implementation is like that,
but there are a number of other hash table algorithms; one, in particular,
only requires rehashing one bucket at a time as it resizes. That is still
as much work in total, but not all in one lump.

We'd also want some good string-hashing functions preimplemented; doing
that right is hard.

> c) ?? - not quite sure how to apply this to B-Trees,
>    would appreciate it if someone explained the issue of weak
>    and double-weak keys to me.

If you want to use your hash table as a cache of some sort, you wouldn't
want it to prevent GC. So a weak hash table stores keys and items, but
doesn't count as a reference to the items. A double-weak hash table
doesn't count as a reference to the keys either.

Andrew
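The "rehash one bucket at a time" idea above can be sketched roughly as follows (a hypothetical illustration in Python, for brevity; the class and method names are made up, not any particular implementation). During a resize the table keeps both the old and new bucket arrays and migrates one old bucket per operation, so the total work is unchanged but never happens in one lump:

```python
class IncrementalHashTable:
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]
        self.old = None       # old bucket array; non-None only mid-resize
        self.next_move = 0    # index of the next old bucket to migrate
        self.count = 0

    def _bucket(self, arr, key):
        return arr[hash(key) % len(arr)]

    def _migrate_one(self):
        # Rehash a single old bucket into the new array.
        if self.old is not None:
            for k, v in self.old[self.next_move]:
                self._bucket(self.buckets, k).append((k, v))
            self.old[self.next_move] = []
            self.next_move += 1
            if self.next_move == len(self.old):
                self.old = None   # resize complete

    def _arrays(self):
        return [self.buckets] + ([self.old] if self.old is not None else [])

    def set(self, key, value):
        self._migrate_one()
        self.delete(key)   # drop any stale copy from either array
        self._bucket(self.buckets, key).append((key, value))
        self.count += 1
        if self.old is None and self.count > 2 * len(self.buckets):
            # Start (not finish) a resize: later operations migrate buckets.
            self.old, self.buckets = (
                self.buckets, [[] for _ in range(2 * len(self.buckets))])
            self.next_move = 0

    def get(self, key, default=None):
        self._migrate_one()
        # During a resize the key may still live in the old array.
        for arr in self._arrays():
            for k, v in self._bucket(arr, key):
                if k == key:
                    return v
        return default

    def delete(self, key):
        for arr in self._arrays():
            bucket = self._bucket(arr, key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    del bucket[i]
                    self.count -= 1
                    return True
        return False
```

Lookups check both arrays while a resize is in flight, which is the price paid for never pausing to rehash everything at once.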
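As an example of the sort of preimplemented string hash the message asks for, here is a sketch of 64-bit FNV-1a (a well-known algorithm; the constants below are its published offset basis and prime):

```python
FNV64_OFFSET = 14695981039346656037   # published 64-bit FNV offset basis
FNV64_PRIME = 1099511628211           # published 64-bit FNV prime

def fnv1a_64(s):
    # FNV-1a: XOR each byte into the state, then multiply by the
    # FNV prime, keeping the result within 64 bits.
    h = FNV64_OFFSET
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h
```

The result can be reduced modulo the bucket count; getting good distribution across arbitrary real-world strings is exactly the hard part the message alludes to.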
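The weak-table behaviour described above can be demonstrated with Python's standard library (used here purely as an illustration of the concept): a `weakref.WeakValueDictionary` holds its values weakly, so the cache alone does not keep an item alive.

```python
import gc
import weakref

class Item:
    # Plain instances are needed; some built-in types can't be
    # weakly referenced in CPython.
    def __init__(self, name):
        self.name = name

cache = weakref.WeakValueDictionary()

kept = Item("kept")
cache["kept"] = kept          # still strongly referenced via `kept`

dropped = Item("dropped")
cache["dropped"] = dropped
del dropped                   # the cache held the only other reference
gc.collect()                  # entry for "dropped" disappears
```

A double-weak table would extend the same treatment to the keys, as the message notes; in Python that would correspond to combining weak keys and weak values.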