This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Re: [patch] Convert frame_stash to a hash table


On 05/16/2013 02:09 PM, Phil Muldoon wrote:
> 
> One of the things I have noticed with frame filters is a significant
> penalty when printing backtraces.  This is because the frame filters
> invalidate the frame_stash.  The reason for this is as follows.
> 
> Python frames do not store the GDB frame, nor should they.  If they
> did, every time the frame cache is invalidated the Python object would
> be pointing to the memory freed by the invalidated frame.  What Python
> frames do store is the frame_id.  This is permanent, and allows Python
> to fetch the frame on demand.
> 
> This on-demand nature is facilitated by a macro in the Python code:
> 
> FRAPY_REQUIRE_VALID
> 
> which fetches the frame with
> 
> frame_find_by_id.
> 
> All Python frame functions call this macro, and thus, all Python frame
> functions end up using frame_find_by_id before each operation.  That's
> fine until you consider that Python frames act almost like a
> random-access method of fetching frames, and the existing frame_stash,
> being a single-entry stash, cannot cope with this.
> 
> On single or occasional frame access this is fine and not noticeable.
> But with large numbers of sustained frame accesses, as in a backtrace,
> the cost of searching the frame cache (because the frame_stash has
> been invalidated) mounts up into a significant penalty.  Here are some
> tests from the existing code in GDB:

I don't think this is the complete picture, particularly for the
backtrace case.  I'll try to sum it up, based on what I recall from a
previous discussion.  Please correct me if I am wrong.

When doing a backtrace, you'll end up linearly walking the frame
chain, and normally you don't go back to newer frames -- unwind a
frame (frame.prev()), print info about it, unwind the next, print it,
on and on.  As such, a single frame stashed in the frame stash should be
sufficient.  But it's not.  frapy_older does:

  TRY_CATCH (except, RETURN_MASK_ALL)
    {
      FRAPY_REQUIRE_VALID (self, frame);

      prev = get_prev_frame (frame);
      if (prev)
	prev_obj = (PyObject *) frame_info_to_frame_object (prev);
      else
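
For reference, FRAPY_REQUIRE_VALID boils down to re-fetching the frame
from the saved frame_id on every call.  Roughly, from memory -- a
sketch, not the exact py-frame.c text:

  #define FRAPY_REQUIRE_VALID(frame_obj, frame)            \
    do {                                                    \
      frame = frame_object_to_frame_info (frame_obj);       \
      if (frame == NULL)                                    \
        error (_("Frame is invalid."));                     \
    } while (0)

with frame_object_to_frame_info being where the frame_find_by_id
lookup on the stored frame_id lives (plus, IIRC, the frame_id_is_next
/ get_prev_frame adjustment).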

The frame_find_by_id call in FRAPY_REQUIRE_VALID should be hitting
the frame stashed in the frame stash, as it's still the last frame
we printed.  The problem is actually in frame_info_to_frame_object
(called above), which does:

  TRY_CATCH (except, RETURN_MASK_ALL)
    {

      /* Try to get the previous frame, to determine if this is the last frame
	 in a corrupt stack.  If so, we need to store the frame_id of the next
	 frame and not of this one (which is possibly invalid).  */
      if (get_prev_frame (frame) == NULL
	  && get_frame_unwind_stop_reason (frame) != UNWIND_NO_REASON
	  && get_next_frame (frame) != NULL)
	{
	  frame_obj->frame_id = get_frame_id (get_next_frame (frame));
	  frame_obj->frame_id_is_next = 1;
	}

and given that the present frame stash can only hold one frame, these
get_prev_frame/get_next_frame calls constantly invalidate it.
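
To make the single-entry limitation concrete, the stash in frame.c is
currently just one pointer.  From memory, roughly:

  static struct frame_info *frame_stash = NULL;

  /* Remember FRAME as the most recently stashed frame.  */

  static void
  frame_stash_add (struct frame_info *frame)
  {
    frame_stash = frame;
  }

  /* Return the stashed frame if it has the given id, else NULL and
     the caller falls back to walking the frame chain.  */

  static struct frame_info *
  frame_stash_find (struct frame_id id)
  {
    if (frame_stash != NULL
        && frame_id_eq (get_frame_id (frame_stash), id))
      return frame_stash;

    return NULL;
  }

Only one frame fits, so stashing any other frame -- which is exactly
what those extra get_prev_frame/get_next_frame calls end up doing --
evicts the entry that the very next frame_find_by_id will ask for, and
we're back to searching the frame chain.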

Now, I don't get this "detect corrupt stack" code at all.  It raises
a lot of alarm bells for me.  All frames in the frame chain have a
unique (for a given stopped thread) id.  I don't get the reference to
an invalid frame id.  What's that?  null_frame_id?  I'd call it a bug
if any unwinder returned that.  Outermost frames use outer_frame_id,
which is valid (outer_frame_id actually should die, but that's another
story).  The original submission of this code doesn't seem to explain
this.

To be clear, I'm not against the hash stash idea at all.  It's likely
to speed up these use cases, even if frame_info_to_frame_object were
changed to not do that dance.
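
For concreteness, here's roughly the shape I'd expect the hashed stash
to take -- just an illustration keyed on the frame id using libiberty's
htab, not your patch; the direct this_id.value accesses assume this all
lives in frame.c:

  static htab_t frame_stash;

  /* Hash a frame by its id's stack address; collisions between
     different ids are fine.  */

  static hashval_t
  frame_stash_hash (const void *p)
  {
    const struct frame_info *frame = p;
    const struct frame_id *id = &frame->this_id.value;

    return iterative_hash (&id->stack_addr, sizeof (id->stack_addr), 0);
  }

  /* Two stash entries are the same iff their frame ids are equal.  */

  static int
  frame_stash_eq (const void *a, const void *b)
  {
    const struct frame_info *lhs = a;
    const struct frame_info *rhs = b;

    return frame_id_eq (lhs->this_id.value, rhs->this_id.value);
  }

  static void
  frame_stash_create (void)
  {
    frame_stash = htab_create_alloc (100, frame_stash_hash,
                                     frame_stash_eq, NULL,
                                     xcalloc, xfree);
  }

  /* Stash FRAME, replacing any previous entry with the same id.  */

  static void
  frame_stash_add (struct frame_info *frame)
  {
    void **slot = htab_find_slot (frame_stash, frame, INSERT);

    *slot = frame;
  }

  /* Look up a stashed frame by id.  Only this_id.value of the dummy
     key is examined by the hash and equality callbacks.  */

  static struct frame_info *
  frame_stash_find (struct frame_id id)
  {
    struct frame_info dummy;

    dummy.this_id.value = id;
    return htab_find (frame_stash, &dummy);
  }

  static void
  frame_stash_invalidate (void)
  {
    htab_empty (frame_stash);
  }

That keeps the frame_find_by_id lookup roughly constant time no matter
how randomly the Python side jumps around the frame chain.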

-- 
Pedro Alves

