This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.



Re: Discussion: Formalizing the deprecation process in GDB



  Ok, I also read the code, but I very much appreciate having good
documentation in book format.  If you've got a serious chunk of architecture
to learn about, it's a lot easier if it's all in one file that you can print
out and browse through at your leisure rather than a page here and a page
there scattered across many files.

(An architecture document is no more than two A4 pages and one diagram - something extremely high-level that still gets the concepts across.)


  FWIW I reckon gcc is getting it very right these days.  There's a
heavyweight internals manual that explains the architecture and big picture
issues.  Each file that implements a substantial module of functionality
then also has documentation about its internals and implementation at the
top of the file.  Usually you only need the internals manual, and only if
you find yourself rummaging around in the depths of alias analysis or
something chasing a bug do you find yourself needing the per-file-internal
documentation.

Yes, it's useful to understand why this is.


GCC has a simple pipeline architecture: <frontend>-<middleend>-<backend>. Its details can be described at two levels: the interfaces between each "end" (ssa / rtl?), and the internals of a specific "end" (this implements algorithm X).
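That pipeline shape can be caricatured in a few lines (a toy sketch only - the stage names mirror the text above; nothing here is real GCC code):

```python
# Toy pipeline sketch: each stage has one job and a defined interface
# (a token list) to the next stage, so any stage can be replaced wholesale.
def frontend(source):
    # parse: source text -> intermediate representation (a token list)
    return source.split()

def middleend(ir):
    # optimize: drop redundant operations from the IR
    return [tok for tok in ir if tok != "nop"]

def backend(ir):
    # lower: IR -> "machine code" (here just a joined string)
    return ";".join(ir)

def run_pipeline(source):
    return backend(middleend(frontend(source)))

print(run_pipeline("load nop add store"))  # -> load;add;store
```

Because the stages only meet at their interfaces, documenting the interface and documenting a stage's internals are genuinely separate jobs - which is the point being made.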

GDB doesn't have that luxury. Its internals model the state of a running program using objects and their interactions. In such a system it is the complex relationships between the objects that are important, not the details of any specific bit of code.

For GCC, a new "end" (or pass) can typically just be plugged in, or an existing one rewritten.

For GDB, fixing the hard problems involves changing those complex object relationships, and that requires significant and regular refactoring.

A system that is being continuously refactored is not well suited to detailed internals documentation - the effort is wasted. Instead, it is the high-level architecture and medium-level object models that are important.
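To make the contrast concrete, here is a toy sketch of that kind of object model (the class names are illustrative only - they are not GDB's actual internals):

```python
# Toy sketch: the interesting complexity lives in how the objects
# reference each other, not in any one method body.
class Architecture:
    def __init__(self, name):
        self.name = name

class Target:                     # the thing being debugged
    def __init__(self, arch):
        self.arch = arch          # a Target runs on an Architecture
        self.frames = []

class Frame:                      # one entry in the call stack
    def __init__(self, target, caller=None):
        self.target = target      # a Frame belongs to a Target...
        self.caller = caller      # ...and links to the frame that called it
        target.frames.append(self)

arch = Architecture("i386")
target = Target(arch)
outer = Frame(target)
inner = Frame(target, caller=outer)

# Unwinding walks the relationships: inner -> outer -> None.
assert inner.caller is outer and outer.caller is None
assert inner.target.arch.name == "i386"
```

Refactoring here means rewiring which object holds which reference; a document describing one class's method bodies goes stale on the first such change, while a diagram of the relationships stays useful far longer.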

  Since opinions are being invited, I'll just mention that I'm currently
working on an internal version of gdb for which I'm having to up-port a
5.x-compatible backend to 6.x series.  I sometimes find it *ever* so hard
when faced with yet another deprecated__ this or obsoleted_ that to know
what the new and approved replacement is, and it often takes a combination
of the internals manual, the in-source documentation and comments, and much
searching of the list archive for the actual patch that made the deprecation
to see how it was done at the time and understand the background and
reasoning behind it. I understand the reasons for using this technique and
agree that it's sound engineering practice and necessary for the onward
development of gdb, but I would like an easier solution to the general
problem of knowing what to replace something with, and one that could be
used off-line or on those occasions when sourceware goes down and you can't
get at the list archive!

For multi-arch I wrote a migration document. This time I did not. There is no "migration" path. The correct approach is:
- delete all deprecated code
- build
- run testsuite
- add a missing architecture vector method
- repeat
Instead of migrating - trying to reproduce each refactoring step - you should leapfrog.
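The loop above can be caricatured in a few lines (a toy simulation only - no real build or testsuite is involved, and none of the method names are GDB's actual architecture vector):

```python
# Toy simulation of the leapfrog loop: delete everything deprecated up
# front, then let build/test failures tell you which architecture-vector
# methods are still missing.
required = {"register_name", "pc_regnum", "breakpoint_kind"}

def build_and_test(arch_vector):
    # stand-in for "make && run testsuite": report one missing method, or None
    missing = required - set(arch_vector)
    return next(iter(sorted(missing)), None)

arch_vector = {}   # deprecated code deleted: start with nothing
while (missing := build_and_test(arch_vector)) is not None:
    print(f"testsuite failure: missing {missing}")
    arch_vector[missing] = lambda: None   # add the missing method, repeat

assert set(arch_vector) == required
```

Each iteration is driven by a concrete failure rather than by replaying the project's refactoring history - which is why no per-step migration document is needed.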


A running joke between several of the GDB developers at the last GCC summit was that we should present a 1hr paper titled "porting GDB to a new architecture". Only instead of presenting slides, we'd just write the code.

Andrew


