This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Re: Non-uniform address spaces


Michael Eager <eager@eagercon.com> writes:
> Jim Blandy wrote:
>> Michael Eager <eager@eagercon.com> writes:
>>> For an example, the SPEs in a Cell processors could be configured
>>> to distribute pieces of an array over different SPEs.
>>>
>>>> How do you declare such an array?  How do you index it?  What code is
>>>> generated for an array access?  How does it relate to C's rules for
>>>> pointer arithmetic?
>>> In UPC (a parallel extension to C) there is a new attribute "shared"
>>> which says that data is (potentially) distributed across multiple processors.
>>>
>>> In UPC, pointer arithmetic works exactly the same as in C: you can
>>> compare pointers, subtract them to get a difference, and add integers.
>>> The compiler generates code which does the correct computation.
>>
>> All right.  Certainly pointer arithmetic and array indexing need to be
>> fixed to handle such arrays.  Support for such a system will entail
>> having the compiler describe to GDB how to index these things, and
>> having GDB understand those descriptions.  
>
> This may be something better described in an ABI than in DWARF.
> The compiler may not know how to translate a pointer into
> a physical address.  UPC, for example, allows you to specify the number
> of threads at runtime.
>
> The compiler certainly can identify that an array or other data
> is shared, to use UPC's terminology.  From there, the target code
> would need to perform some magic to figure out where the address
> actually points.

Certainly, an ABI informs the interpretation of the debugging info.
Do you have specific ideas yet on how to convey this information?
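
For reference, the kind of UPC declaration under discussion looks
roughly like this (a minimal sketch, assuming a UPC compiler; shared,
THREADS, MYTHREAD, and upc_barrier are UPC language features, not
anything GDB-specific):

#include <upc.h>   /* UPC runtime: THREADS, MYTHREAD, upc_barrier */

/* Elements are dealt out across all UPC threads in blocks of four;
   with THREADS chosen at run time, the physical layout is not known
   until the program starts.  */
shared [4] int data[4 * THREADS];

int
main (void)
{
  shared [4] int *p = data;   /* pointer-to-shared */

  if (MYTHREAD == 0)
    for (int i = 0; i < 4 * THREADS; i++)
      p[i] = i;   /* ordinary C indexing; the compiler emits the
                     <thread, local offset> computation */

  upc_barrier;
  return 0;
}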

>> If those were fixed, how do the other CORE_ADDR uses look to you?
>> Say, in the frame code?  Or the symtab code?
>
> There are uses of CORE_ADDR values which assume that arithmetic
> operations are valid, such as testing whether a PC address is
> within a stepping range.  These are not likely to cause problems,
> because code space generally does conform to the linear space
> assumptions that GDB makes.

Right --- this is what I was alluding to before: most often the
addresses being compared actually come from something known to be
contiguous, so it'll work out.
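
The check in question is just a pair of CORE_ADDR comparisons, roughly
like the following sketch (simplified from what the stepping code in
infrun.c does; CORE_ADDR is GDB's own typedef).  It stays valid as long
as code occupies a single linear address space:

/* Is the new PC still inside the range we are stepping over?  */
static int
pc_in_step_range (CORE_ADDR pc, CORE_ADDR start, CORE_ADDR end)
{
  return pc >= start && pc < end;
}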

> There are other places where an address is incremented, such as
> in displaying memory contents.  I doubt that the code knows
> what it is displaying, only that it should display n words starting
> at address x in format z.  This would probably result in incorrect
> results if the data spanned from one processor/thread to another.
> (At least at a first approximation, this may well be an acceptable
> restriction.)

Certainly code for printing distributed objects will need to
understand how to traverse them properly; I see this as parallel to
the indexing/pointer arithmetic requirements.  Hopefully we can design
one interface that serves both purposes nicely.
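
One way to picture such an interface is a single per-target hook that
locates element I of an object starting at BASE; the expression
evaluator's indexing code and the memory printer could both call it
instead of open-coding base + i * size.  A purely hypothetical sketch
(the name and signature are made up for illustration, not an existing
gdbarch method; CORE_ADDR, LONGEST, and ULONGEST are GDB typedefs):

/* Return the address of element INDEX of an object whose first element
   is at BASE.  Distributed objects would override this; the default is
   the linear computation GDB assumes everywhere today.  */
static CORE_ADDR
default_element_address (CORE_ADDR base, LONGEST index, ULONGEST elt_size)
{
  return base + index * elt_size;   /* only valid for contiguous objects */
}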

> Symtab code would need a hook which converted the ELF
> <section,offset> into a <processor,thread,offset> for shared
> objects.  Again, that would require target-dependent magic.

Hmm.  GDB's internal representation for debugging information stores
actual addresses, not <section, offset> pairs.  After reading the
information, we call objfile_relocate to turn the values read from the
debugging information into real addresses.  It seems to me that this
code should already be doing the job you describe.
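
To put it another way, the relocation step amounts to adding each
section's load offset to the values read from the debug info; that is
also the natural spot for a target to fold in a processor/thread id.
A much-simplified sketch (not the real objfile_relocate, which also has
to walk symbol tables, psymtabs, line tables, and so on; the
encode_numa_addr hook is hypothetical):

static CORE_ADDR
relocate_symbol_addr (CORE_ADDR addr_from_debug_info,
                      CORE_ADDR section_offset, int thread)
{
  CORE_ADDR real = addr_from_debug_info + section_offset;
#ifdef TARGET_NUMA_ADDRESSES
  real = encode_numa_addr (thread, real);   /* hypothetical target hook */
#endif
  return real;
}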

How does code get loaded in your system?  Does a single module get
loaded multiple times?

In GDB, each objfile represents a specific loading of a library or
executable.  The information is riddled with real addresses.  If a
single file is loaded N times, you'll need N objfiles, and the
debugging information will be duplicated.

In the long run, I think GDB should change to represent debugging
information in a loading-independent way, so that multiple instances
of the same library can share the same data.  In a sense, you'd have a
big structure that just holds data parsed from the file, and then a
bunch of little structures saying, "I'm an instance of THAT, loaded at
THIS address."

This would enable multi-process debugging, and might also allow us to
avoid re-reading debugging info for shared libraries every time they
get loaded.
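
A rough sketch of that split, just to make the "big structure / little
structures" idea concrete (illustrative types, not GDB's actual ones):

/* Loading-independent data, parsed once from the file on disk.  */
struct debug_file
{
  const char *filename;
  struct symtab *symtabs;   /* addresses kept as file-relative offsets */
  /* ... types, line tables, macro info, ...  */
};

/* One instance of that file mapped into some inferior.  */
struct debug_instance
{
  struct debug_file *file;   /* shared among all instances */
  CORE_ADDR load_base;       /* "loaded at THIS address" */
  int inferior_id;           /* which process (or SPE) it belongs to */
};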

> One problem is that it may not be clear whether one has a
> pointer to a linear code space or to a distributed NUMA data space.
> It might be reasonable to model the linear code space as a 64-bit
> CORE_ADDR, with the top half zero, while a NUMA address has non-zero
> values in the top half.  (I don't know if there might be alias
> problems, where zero might be valid for the top half of a NUMA address.)

I think this isn't going to be a problem, but it's hard to tell.  Can
you think of a specific case where we wouldn't be able to tell which
we have?
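
With the encoding you describe, telling the two apart is a test on the
high half of the CORE_ADDR.  A sketch, assuming a 64-bit CORE_ADDR with
the thread id in the upper 32 bits (macro names are made up); numbering
NUMA threads from 1 would sidestep the aliasing worry, since a zero top
half would then always mean "linear code space":

#define NUMA_ADDR(thread, offset) \
  (((CORE_ADDR) (thread) << 32) | ((CORE_ADDR) (offset) & 0xffffffffULL))
#define NUMA_THREAD(addr)  ((int) ((addr) >> 32))
#define NUMA_OFFSET(addr)  ((addr) & 0xffffffffULL)

/* Nonzero if ADDR refers to the distributed data space.  */
#define IS_NUMA_ADDR(addr) (NUMA_THREAD (addr) != 0)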

> I'd be very happy figuring out where to put a hook which allowed me
> to translate a NUMA CORE_ADDR into a physical address, setting the
> thread appropriately.  A bit of a kludge, but probably workable.

CORE_ADDR should be capable of addressing all memory on the system.  I
think you'll make a lot of trouble for yourself if you don't follow
that rule.
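
With a flat encoding like the one sketched above, both goals can be
met: the CORE_ADDR names every byte on the system, and the target code
only unpacks it at the point where memory is actually read or written.
A hypothetical sketch (reusing the NUMA_* macros from the earlier
sketch; select_spe_thread and raw_spe_xfer are made-up helpers, and
gdb_byte is GDB's byte typedef):

/* Transfer LEN bytes at ADDR, unpacking the flat address into a
   <thread, local address> pair just before touching memory.  Returns
   the number of bytes transferred.  */
static int
numa_xfer_memory (CORE_ADDR addr, gdb_byte *buf, int len, int write)
{
  int thread = NUMA_THREAD (addr);
  CORE_ADDR local = NUMA_OFFSET (addr);

  select_spe_thread (thread);                    /* hypothetical */
  return raw_spe_xfer (local, buf, len, write);  /* hypothetical */
}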

