This is the mail archive of the systemtap@sources.redhat.com mailing list for the systemtap project.



Re: runtime committed to cvs


On Thu, 2005-03-10 at 09:58 -0500, Frank Ch. Eigler wrote:
> Hi -
> 
> hunt wrote:
> 
> > [...]
> > > As mentioned before, this is
> > > not appropriate in the context of an embedded probe that could easily
> > > gobble up GFP_ATOMIC memory.  I believe support for preallocation of
> > > such data, including strings, is critical.  
> > 
> > Currently only small amounts of memory (the length of a string or two)
> > are allocated using GFP_ATOMIC.  
> 
> Really?  I see lots of chunks like
>      m->key1.str = _stp_alloc(strlen(map->c_key1.str) + 1);
>      strcpy(m->key1.str, map->c_key1.str);
> throughout the map code.

That allocates memory for a string key.
 
> > All the memory allocations are written to be able to plug in a
> > different allocation system later.  
> 
> Maybe OK, though it would be unfortunate to require a custom malloc
> implementation in the runtime just because of an impression that
> arbitrary length strings are necessary.

Alternatives?

1. Allocate a big chunk of memory to use for string storage. A malloc
call just grabs some memory from that chunk and advances a pointer by the
length of the string.  Free does nothing.  This is very efficient if we
assume that probes are simply recording information and strings in
key/value pairs are not changing often.  (A rough sketch of this
approach follows the list.)
 
2. Some kind of memory pool that is enlarged as needed by a work queue.
Isn't this like the memory pool for GFP_ATOMIC?

3. Allocate a big chunk of memory and write our own malloc/free to
manage it.

4. ???
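For concreteness, here is a minimal, userspace-testable sketch of
alternative 1.  The pool size and the names _stp_str_pool,
_stp_str_alloc, _stp_str_free and _stp_str_dup are made up for
illustration, not the runtime's actual API, and a real version would
need a spinlock or per-cpu pools since probes can fire concurrently.

    #include <stddef.h>
    #include <string.h>

    #define STR_POOL_SIZE (64 * 1024)   /* reserved once, e.g. at module init */

    static char _stp_str_pool[STR_POOL_SIZE];
    static size_t _stp_str_pool_used;

    /* "malloc": advance a pointer into the preallocated chunk. */
    static char *_stp_str_alloc(size_t len)
    {
            if (_stp_str_pool_used + len > STR_POOL_SIZE)
                    return NULL;   /* pool exhausted; no GFP_ATOMIC fallback */
            _stp_str_pool_used += len;
            return _stp_str_pool + _stp_str_pool_used - len;
    }

    /* "free": does nothing; storage is reclaimed only when the pool
     * is reset or the module unloads. */
    static void _stp_str_free(char *p)
    {
            (void)p;
    }

    /* Copy a string key into the pool, as in the chunk quoted above. */
    static char *_stp_str_dup(const char *s)
    {
            size_t len = strlen(s) + 1;
            char *p = _stp_str_alloc(len);
            if (p)
                    memcpy(p, s, len);
            return p;
    }

Free being a no-op is what makes this cheap; the trade-off is that the
pool only shrinks when it is reset, which matches the assumption that
string keys are mostly written once and read later.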

> I posit that is simply because the code hasn't run much in adverse
> enough environments.  Consider trying a module that allocates and
> deallocates a hundred thousand stringy map elements.

We will, of course, do stress testing and performance monitoring.  But
is that scenario realistic?  Hash tables are not going to give good
performance with a hundred thousand keys.

Martin


