This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Re: [RFA] Use data cache for stack accesses
On Wednesday 08 July 2009 21:58:44, Pedro Alves wrote:
> On Wednesday 08 July 2009 21:51:40, Daniel Jacobowitz wrote:
> > Or could we store a dcache per-inferior?  Jacob's right - I thought
> > there was an 'inferior_data' to store arbitrary data per-inferior,
> > but there isn't.
> > I don't like baking knowledge into other modules
> > of GDB that they can extract the PID and use it to key per-inferior
> > data.
> >
> > Or just add it to struct inferior?
To complete a thought: if we don't add a generic inferior_data so that
modules can put whatever they want there, then adding a new member
to struct inferior exposes some module's internal detail to the rest
of GDB. I'm not sure I like that better either. Note that we
always get the inferior from the pid (it's mostly unavoidable,
we need a way to map a target reported pid to an internal inferior).
Even current_inferior() does that.
> The reason I didn't suggest that is that we don't have an
> address space object --- yet. My multi-exec patches add one, and
> there's a similar mechanism to per-objfile data where we can
> do things like these. More than one inferior can share the
> same address space,
I forgot to give a real, current example of this: debugging both
the parent and the child across a vfork.
> so we could share a cache between them, but,
> if the cache is write-through, it won't be a big problem if we
> don't --- it would be if it were left as writeback.
>
> I don't have a problem with adding this to struct inferior for now,
> it just seemed premature.
>
Actually, I think that if you make it per-inferior and *don't flush it
when switching inferiors*, you introduce a bug visible when debugging
vforks. Say: stay attached to both parent and child; get a backtrace
with the child selected; switch to the parent and change some piece of
the (shared) stack memory that the child's dcache holds; switch back
to the child, and observe that its stack cache is now stale.
--
Pedro Alves