This is the mail archive of the guile@sourceware.cygnus.com mailing list for the Guile project.
Re: Some profile results: scm_sloppy_assq?
- To: Jan Nieuwenhuizen <janneke at gnu dot org>
- Subject: Re: Some profile results: scm_sloppy_assq?
- From: Mikael Djurfeldt <mdj at mdj dot nada dot kth dot se>
- Date: 14 Jun 2000 15:36:39 +0200
- Cc: guile at sourceware dot cygnus dot com, "ir. Wendy" <hanwen at cs dot uu dot nl>
- Cc: djurfeldt at nada dot kth dot se
- References: <200006141256.OAA23278@appel.dyndns.org>
Jan Nieuwenhuizen <janneke@gnu.org> writes:
> Did anything weird happen to scm_sloppy_asc?
(I presume you mean scm_sloppy_assq?)
No.
> Of course, it could be
> that 1.3.4's version for some reason worked remarkably well with
> LilyPond's (in that case probably ill-using-guile) internals.
>
> For 1.3.5, the number of calls increased by 1.5 but uses ridiculously
> more cycles, 626 seconds instead of 0.38 seconds?
scm_sloppy_assq is used to find the right key/data pair in a hash
bucket. If a hash table is too small, or if the hash function
performs poorly, this will result in long lists of key/data pairs in
the hash table. This, in turn, means more work for
scm_sloppy_assq. (If the lists contain just one element, the work in
scm_sloppy_assq will be almost zero.)
Still, it seems strange. If I read your figures correctly,
scm_sloppy_assq seems to dominate execution time. This could indicate
some newly introduced bug.
> ((gc-time-taken . 9077) (cells-allocated . 4262286) (cell-heap-size . 5056864) (bytes-malloced . 34083349) (gc-malloc-threshold . 35329605) (cell-heap-segments (270957392 . 270909344) (808214080 . 807845896) (808406632 . 808214536) (808823256 . 808407048) (809513184 . 808824840) (810670224 . 809517072) (812609104 . 810672136) (814706592 . 812609544) (816803744 . 814706696) (818900896 . 816803848) (820998048 . 818901000) (823095200 . 820998152) (825192352 . 823095304) (827289504 . 825192456) (829386656 . 827289608) (831483808 . 829386760) (833580960 . 831483912) (835678112 . 833581064) (837776320 . 835678224) (839876512 . 837779464) (841973664 . 839876616) (844071872 . 841973776) (846172064 . 844075016) (848269216 . 846172168)))
> user: 2756.92(275692) system: 5.20(520)
> elapsed: 2768.39
> MAXSIZE: 96.066M(24593), MAXRSS: 96.066M(24593)
> AVGSIZE: 72.281M(18504), AVGRSS: 72.281M(18504)
BTW, the current GC parameters aren't tuned for such a large
application. You can define the following environment variables:
GUILE_INIT_SEGMENT_SIZE_1 Size of initial heap segment in bytes
(default = 360000)
GUILE_MIN_YIELD_1 Minimum number of freed cells at each
GC in percent of total heap size
(default = 40)
GUILE_MAX_SEGMENT_SIZE Maximal segment size
(default = 2097000)
For example, if you increase GUILE_MAX_SEGMENT_SIZE, you'll get fewer
heap segments.