This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Re: [rfc][3/3] Remote core file generation: memory map


Jan Kratochvil wrote:
> On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > Note that there already is a qXfer:memory-map:read packet, but this
> > is not usable as-is to implement target_find_memory_regions, since
> > it is really intended as a *system* memory map for bare-metal
> > embedded targets rather than a per-process virtual address space map.
> > 
> > For example:
> > 
> > - the memory map is read into a single global mem_region list; it is not
> >   switched for multiple inferiors
> 
> Without extended-remote there is only a single address space.  Is the memory
> map already useful with extended-remote when separate address spaces are used?
> 
> I do not have experience with embedded memory maps, but it seems to me the
> memory map should be specified per address space, and therefore per inferior
> this is fine (at worst, duplicate maps get sent when the address spaces are
> the same).  When GDB uses the memory map, it already does so on behalf of
> some inferior, and therefore of that inferior's address space.

The problem is that the way GDB uses the memory map is completely
incompatible with the presence of multiple address spaces.

There is a single instance of the map (kept in a global variable
mem_region_list in memattr.c), which is used for any access in
any address space.  lookup_mem_region takes only a CORE_ADDR;
the "info mem" commands only operate on addresses with no notion
of address spaces.  The remote protocol also does not specify
which address space a map is requested for.
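
To make this concrete, here is a simplified sketch of the shape of the
current interface versus what a per-address-space variant would have to
look like.  These are illustrative declarations only, not the actual
memattr.c code, and lookup_mem_region_aspace is a hypothetical name:

/* Simplified sketch only -- not the actual memattr.c declarations.
   Placeholder types so the sketch stands alone; GDB's real
   definitions live in defs.h and memattr.h.  */

typedef unsigned long long CORE_ADDR;
struct address_space;

struct mem_attrib
{
  int cache;                    /* May reads from this region be cached?  */
};

struct mem_region
{
  CORE_ADDR lo;                 /* Start of the region.  */
  CORE_ADDR hi;                 /* End of the region (exclusive).  */
  struct mem_attrib attrib;     /* Access attributes.  */
};

/* Today: one global list, consulted for accesses in *any* address
   space, and a lookup keyed on the address alone.  */
extern struct mem_region *lookup_mem_region (CORE_ADDR addr);

/* What multi-address-space support would need instead (hypothetical
   name): the lookup must know which address space the access belongs
   to, and the map itself would have to be kept per address space.  */
extern struct mem_region *lookup_mem_region_aspace
  (struct address_space *aspace, CORE_ADDR addr);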

This doesn't appear to matter much in practice, since the native
targets and gdbserver do not implement memory maps at all.  Only some
special-purpose remote stubs apparently do, and those are probably
for targets that do not support multiple address spaces.

However, this means that it isn't easily possible to simply switch to
providing memory maps for the native and gdbserver targets, because we
would then run into exactly those problems ...

> I need to implement core file reading support in gdbserver in the foreseeable
> future, for performance reasons.  For the core file case everything can be
> cached indefinitely (and caching is more significant there than in the local
> core file case).  The caching can and should be enabled even in the normal
> live process case (by setting default_mem_attrib->cache = 1), but there it
> needs to be temporary (flushed by prepare_execute_command).  For embedded
> targets the caching should remain disabled for memory-I/O regions even if it
> is enabled elsewhere.
> 
> The caching should probably stay in the memory map and not be moved into the
> process map.  This all suggests to me that the separation in the submitted
> patch may complicate things a bit.

Yes, if you want to enable memory-map features on gdbserver targets, then
those problems will need to be fixed.  In *that* case, it would make more
sense to avoid introducing a new map.
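
To illustrate the caching policy Jan describes above, here is a sketch
under assumed names only: region_attrib, set_cache_policy and the
target_kind enum are made up for the example; the cache flag stands in
for the default_mem_attrib->cache attribute he mentions:

/* Hedged illustration only; none of these names are actual GDB code.  */

enum target_kind { TARGET_CORE_FILE, TARGET_LIVE_PROCESS };

struct region_attrib
{
  int is_io;                    /* Memory-mapped I/O: reads may have
                                   side effects.  */
  int cache;                    /* May GDB cache reads from this region?  */
};

/* Decide the cache attribute for one region.  Returns nonzero if the
   cache must be flushed before every user command (the live-process
   case Jan refers to via prepare_execute_command).  */

static int
set_cache_policy (struct region_attrib *attr, enum target_kind kind)
{
  if (attr->is_io)
    {
      /* Never cache device registers, even if caching is otherwise
         enabled -- the embedded-target case.  */
      attr->cache = 0;
      return 0;
    }

  /* Ordinary memory: caching pays off for both cores and live
     processes (default_mem_attrib->cache = 1 in Jan's terms).  */
  attr->cache = 1;

  /* Core file contents never change, so the cache can be kept
     indefinitely; a live process needs a per-command flush.  */
  return kind == TARGET_LIVE_PROCESS;
}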

> > +const struct gdb_xml_attribute vma_attributes[] = {
> > +const struct gdb_xml_element process_map_children[] = {
> > +const struct gdb_xml_element process_map_elements[] = {
> 
> These should be static; this is already a bug in memory-map.c, but there are
> too many such bugs; someone could spend some time fixing them, for example
> using my:
> 	http://git.jankratochvil.net/?p=nethome.git;a=blob_plain;hb=HEAD;f=bin/checkstatic

Fixed, thanks.
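
(For readers not familiar with the issue: without "static", a file-scope
const array still has external linkage, so identically named tables in
different files clash in the global namespace.  A standalone
illustration, unrelated to the actual patch contents:

/* Illustration of the linkage point only; unrelated to the patch.  */

/* Without "static" this const table has external linkage: its name is
   exported from the object file and can collide with an identically
   named table defined in another file.  */
const int exported_table[] = { 1, 2, 3 };

/* With "static" the table has internal linkage and stays private to
   this translation unit -- the fix applied here.  */
static const int private_table[] = { 4, 5, 6 };
)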

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com

