
Re: [ltt-dev] Linux Trace terminology - feedback requested


> as a first step I'm working on some terminology
> definitions.

Very good idea!

>  * event - an instruction location or system state at a specific point in time
>  * event data - information related to an event
>     * i.e., in some trace systems, event data size can vary depending on the event.
>  * capture - the act of recording event information
event capture
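
On variable event data size, a hypothetical record layout (the names
here are made up, not taken from any particular tracer) could be:

/* Hypothetical event record: a fixed header followed by a payload
 * whose size depends on the event type. */
struct event_header {
        unsigned short type;            /* event identifier */
        unsigned short payload_size;    /* bytes following the header */
        unsigned long long timestamp;   /* time of capture */
};

/* In the trace buffer, records are laid out back to back:
 * [header][payload][header][payload]...  A reader advances by
 * sizeof(struct event_header) + payload_size for each record. */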

>  * trace point - a location in the traced software, where an event is "emitted"
>  * trace buffer - location where trace data is stored at time of capture
>  * trace log - location where trace data is stored long-term
>  * configuration interface - the API or mechanism used to configure the tracing engine
You may want to define the "tracing engine" and then use the name
"tracing engine configuration interface".

>  * control interface - the API or mechanism used to control the tracing engine
tracing engine control interface

>  * transfer interface - the API or mechanism used to move the trace data from
>    kernel to user space
trace transfer interface

>  * trace time - the time when the trace is active
>     * i.e., the trace buffer may be accessed at trace time, that is, while the trace is active.
>  * post-processing - manipulation of the trace data after the trace is collected
trace post-processing

>  * configuration - the set of constraints which determine what events are collected
>    and how they are processed in a trace
tracing engine configuration

>  * static tracepoint - a trace point statically compiled into the software being traced
>  * dynamic tracepoint - a trace point dynamically added to the software being traced

You may want to distinguish tracepoints which are labeled directly in
the source code ("in-source tracepoint", e.g. a printk-like statement
permanently in the source file) from those which are described elsewhere
("externally described tracepoint", e.g. a gdb breakpoint described in a
gdb script, or a dynamic kernel probe described in a script). In-source
tracepoints could be "statically compiled out" (through proper C
preprocessor magic) or compiled in as "static tracepoints". They may
even be statically compiled out but later inserted as dynamic
tracepoints. Similarly, "externally described tracepoints" could be
compiled in as static tracepoints through the use of a special source
code preprocessor, or could be inserted as dynamic tracepoints.

Of course the typical case is "in-source tracepoints" used as "static
tracepoints" and "externally described tracepoints" used as "dynamic
tracepoints". Nonetheless, it is important to at least stress the fact
that static tracepoints can be compiled out for zero overhead.
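
For illustration, an in-source tracepoint that can be statically
compiled out might be wrapped as follows (a sketch only; TRACE_EVENT,
trace_emit and CONFIG_TRACE are hypothetical names, not actual LTT or
SystemTap code):

#ifdef CONFIG_TRACE
/* Compiled in: a static tracepoint. */
#define TRACE_EVENT(id, fmt, ...) trace_emit(id, fmt, ##__VA_ARGS__)
#else
/* Statically compiled out: expands to nothing, zero overhead. */
#define TRACE_EVENT(id, fmt, ...) do { } while (0)
#endif

/* At the trace point location in the traced software: */
TRACE_EVENT(NET_PACKET_IN, "len=%d", len);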

>  * aggregation - updating statistics or other analytical information, based on trace events
>      * i.e., SystemTap can do aggregation at trace time, while KFT and LTTng do
>        aggregation during post-processing (mostly).
Aggregation during post-processing is a given. Perhaps you want to
refer more specifically to dynamic aggregation (simple computation
performed at event capture time)?
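
As a sketch of what such dynamic aggregation could mean (hypothetical
names; NR_SYSCALLS and the hook are assumed for the example):

/* Aggregation at event capture time: update a statistic instead of
 * (or in addition to) storing the raw event in the trace buffer. */
static unsigned long syscall_count[NR_SYSCALLS];

void on_syscall_event(int nr)
{
        syscall_count[nr]++;    /* cheap computation in the capture path */
}

/* Post-processing aggregation would instead read every raw event back
 * from the trace log and compute the same counts offline. */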

>  * filters - criteria used to limit the events that are processed or captured
>  * triggers - criteria used to start and stop tracing automatically

Trace point activation is an important topic for which more options
should appear in LTT in the near future. The activation of a group of
trace points (e.g. networking events) could be performed statically (at
compile time) or dynamically. It could be implemented at the trace point
location (for instance as self-modifying code containing a conditional
instruction testing an immediate boolean, or NOPs used as placeholders
for the tracing instructions), in which case the activation takes effect
globally. It could also be implemented as filtering on a trace-by-trace
basis (user A wants all his processes traced for scheduling and I/O
events, user B wants one process traced for system calls and networking
events).
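
The immediate boolean variant could look roughly like this (a sketch;
trace_net_active and trace_emit are hypothetical names):

/* Dynamic trace point activation for a group (networking events).
 * The flag is toggled through the control interface; the test sits at
 * the trace point location, so activation takes effect globally. */
extern int trace_net_active;

#define TRACE_NET(id, ...)                              \
        do {                                            \
                if (trace_net_active)                   \
                        trace_emit(id, ##__VA_ARGS__);  \
        } while (0)

/* A self-modifying code implementation would instead patch the test
 * (or NOP placeholders) in and out, avoiding even this conditional
 * branch while the group is inactive. */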

Perhaps we should talk about "static trace point activation" (compiled
in or not) and "dynamic trace point activation" (activated or not at the
trace point, for all traces). Then, "filtering" could be used to decide
if a specific event gets stored in a specific trace when more than one
trace is recorded at a time.
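
In code, the activation/filtering distinction might be (again a
hypothetical sketch, reusing the event_header record from above;
trace_buffer and buffer_store are assumed):

struct trace_buffer;    /* opaque here; details omitted */
void buffer_store(struct trace_buffer *buf,
                  const struct event_header *ev);

/* The event already passed the global activation test at the trace
 * point. Filtering then decides, trace by trace, whether each
 * concurrently recorded trace stores this specific event. */
struct trace {
        struct trace_buffer *buf;
        int (*filter)(const struct event_header *ev);  /* per-trace criteria */
};

void dispatch_event(const struct event_header *ev,
                    struct trace *traces, int ntraces)
{
        int i;
        for (i = 0; i < ntraces; i++)
                if (traces[i].filter(ev))
                        buffer_store(traces[i].buf, ev);
}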


