Implementing a generic binary trace interface.

Martin Hunt hunt@redhat.com
Fri Jan 27 10:50:00 GMT 2006


If I understand it correctly, you propose reserving a buffer, making
numerous trace() calls that the translator compiles directly into
writes to that buffer, and then making one call into the runtime to
send the accumulated trace data? How would this interact with normal
ASCII output (if you mix printf() with trace())? And how do you
specify the format of the stored data?
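For concreteness, here is roughly how I read that proposal. Every name
below (_stp_reserve, _stp_flush, the record layout) is my own guess for
illustration, not an actual runtime entry point:

#include <stddef.h>

/* Hypothetical translator output for one probe: reserve a record,
 * fill its fields with direct assignments, then commit it.  The
 * runtime is entered only at the reserve and the flush. */
struct trace_record {
    unsigned long long timestamp;   /* common timestamp, written once */
    long long arg0;
    long long arg1;
};

extern void *_stp_reserve(size_t len);  /* hypothetical: claim buffer space */
extern void _stp_flush(void *rec);      /* hypothetical: commit the record */

static void probe_handler(long long a, long long b)
{
    struct trace_record *rec = _stp_reserve(sizeof(*rec));
    if (!rec)
        return;
    rec->timestamp = 0;     /* e.g. a cycle counter */
    rec->arg0 = a;          /* translator writes each field directly */
    rec->arg1 = b;
    _stp_flush(rec);        /* the only other runtime call in the probe */
}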

On Thu, 2006-01-26 at 21:57 -0500, Frank Ch. Eigler wrote:
> There are several differences between this scheme (basically a printf
> with a different back-end function in the runtime) and the others.
> The straightforward implementation would require copying of the
> individual data values onto and off the stack, a function call,
> format string parsing in the runtime, and possible problems with
> composing a large trace record from several pieces (e.g.  common
> timestamps).

If you use the trace function I proposed, each call logs as many
arguments as you give it, in the encoding specified by the format
string. There is no "composing a large trace record" problem; each
call is one complete record.

> Think of my proposed trace() special function as something that is
> compiled right down into a single assignment per field.  The runtime
> would not be called except at the beginning and the end of the probe,
> only to reserve and to flush a trace structure.

That might be good, depending on the details, but it is not more
efficient.

----

While I've got the numbers handy: I did some profiling, and here are
the results from my 2.6 GHz desktop.

A printf() of six 64-bit integers, converted to ASCII, averaged 2.5
usecs, though the exact time depends on how large the numbers are.
Function-call overhead, including copying the values onto the stack,
was 0.03 usecs.

Based on a quick prototype, I expect a binary printf of six 64-bit
values to take between 0.2 and 0.4 usecs. ASCII printf performance
could also be improved with a rewrite.
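For anyone who wants to reproduce the comparison, here is a rough
user-space analogue. It is a sketch only; my figures above came from
inside the runtime, so expect different absolute numbers here:

#include <stdio.h>
#include <string.h>
#include <time.h>

static double elapsed_usec(struct timespec a, struct timespec b)
{
    return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / 1e3;
}

int main(void)
{
    long long v[6] = { 123456789LL, 2LL, 34567LL, 4LL, 567890123456LL, 6LL };
    char buf[256];
    struct timespec t0, t1;
    const int iters = 1000000;
    int i, sink = 0;

    /* ASCII: format six 64-bit integers into text each iteration. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < iters; i++) {
        snprintf(buf, sizeof(buf), "%lld %lld %lld %lld %lld %lld",
                 v[0], v[1], v[2], v[3], v[4], v[5]);
        sink += buf[0];             /* keep the optimizer honest */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("ascii:  %.3f usec/call\n", elapsed_usec(t0, t1) / iters);

    /* Binary: copy the same 48 bytes with no formatting at all. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < iters; i++) {
        memcpy(buf, v, sizeof(v));
        sink += buf[0];
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("binary: %.3f usec/call\n", elapsed_usec(t0, t1) / iters);

    return sink & 1;                /* reference sink so it is not elided */
}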

Martin




