This is the mail archive of the systemtap@sourceware.org mailing list for the systemtap project.
Re: statistics with intermediate results
- From: fche at redhat dot com (Frank Ch. Eigler)
- To: Martin Peschke <mp3 at de dot ibm dot com>
- Cc: systemtap at sources dot redhat dot com
- Date: 11 Jan 2006 23:07:17 -0500
- Subject: Re: statistics with intermediate results
- References: <43C51A9C.10003@de.ibm.com>
Martin Peschke <mp3@de.ibm.com> writes:
> [...]
> The problem is where to put the first timestamp. It would
> be per request. But when I use dynamic instrumentation, e.g.
> systemtap, then I can't put some spare bytes in a
> per request data structure to store intermediate results.
I don't understand what is blocking you. There is no "per request
data structure" in systemtap - spare or otherwise. You copy values
out of the kernel side with $target variables, and correlate them on
the script side.
You can declare and use as many script-side arrays as you see fit, and
index them as you see fit. As long as you can recompute the same
index tuple (a pid, request pointer address, and/or whatever) at the
probe points that correspond to the beginning and the end of a
computation, just use the array to store the temporaries ("start
time").
Once you have a real result ("elapsed time") you want to store, put
that in a new array, which can be one that carries statistical values.
Use the "<<<" accumulation operator to add values, and the @avg etc.
operators to read results.
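A minimal sketch of this pattern (the probe points and the $req target
variable are hypothetical - substitute whatever pair of kernel functions
delimits the lifetime of a request in your subsystem):

```
global start, elapsed

probe kernel.function("submit_request") {
  # key the temporary on the request pointer, which both probes can see
  start[$req] = gettimeofday_us()
}

probe kernel.function("complete_request") {
  t = start[$req]
  if (t) {
    # accumulate the real result into a statistics aggregate
    elapsed <<< gettimeofday_us() - t
    delete start[$req]
  }
}

probe end {
  printf("%d requests, avg %d us\n", @count(elapsed), @avg(elapsed))
}
```

The same aggregate also supports @min, @max, @sum, and histogram
extractors such as @hist_log, read out the same way.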
> [...] I guess, one could report all events, like send time, receive
> time and so on, through systemtap and defer all processing to a user
> land script. That's the Linux Kernel Event Trace Tool approach:
> [...]
That is one possible approach, but it is not generally necessary with systemtap.
- FChE