This is the mail archive of the gdb@sourceware.cygnus.com mailing list for the GDB project.



Remote protocol tune up...


Hello,

Per a message on gdb-patches, there has been some discussion about how
to improve the performance of GDB's download.

Apart from the obvious - the lack of windowing in the protocol - a
number of other factors have accumulated to result in less than optimal
download performance.

The first thing to note is that GDB, at several levels
(symfile.c:generic_load(), target.c:target_xfer_memory(), dcache:*,
remote_write_bytes()), breaks transfers down into smaller chunks.  The
problem is that each level doesn't co-operate with the one above or
below, and this leads to chunk interference and sub-optimal
transfers.  This interference needs to be eliminated - at all times we
should be sending the largest packet possible.

The second problem, which both Frank E and Mark S have noted in earlier
e-mail, is that it isn't possible to increase the download packet size
to anything significant.  Both symfile.c:generic_load() with
GENERIC_LOAD_CHUNK and remote_write_bytes() with PBUFSIZ impose fairly
arbitrary download limits.  (There is also dcache but since that is
disabled by default I'm going to ignore it - another FIXME :-)  These
arbitrary limits need to be lifted.


To address this I'll need to make a number of fairly significant changes
(described below).  I'm interested in hearing from people willing to
give the remote code rigorous testing during this transition phase.


* remote_write_bytes()

This will be simplified so that it transfers a single packet and then
returns the number of bytes that the packet contained.  That way the
caller has the opportunity to re-charge a transfer (adding more data)
when possible.  It also means that, in theory at least,
remote_write_bytes() could be handed the entire buffer and transfer it
all in a mega packet if it knew how :-)

I'll also add a FIXME to remote_read_bytes() pointing out that it should
be changed the same way :-)


* target_read_memory_partial(), target_write_memory_partial()

The current target_read_memory_partial() has somewhat bizarre semantics
(IMO).  Further, it's only used by one function in valprint.c.  I'll move
it to that file and rename it.

New versions of these functions will have very simple semantics.  They
will return the actual number of bytes transferred and will _not_ re-try
when a partial transfer occurs.


* generic_load()

(Ignoring the bug fixes) This function will be changed to use
target_write_memory_partial(). This allows the code to re-charge each
transfer attempt to the maximum allowable (according to the chunk size).
The GENERIC_LOAD_CHUNK will also be replaced by a run-time variable that
even allows chunking to be disabled.  I'll also bump the default chunk
size up to 512 (2*512 + header/footer <= sizeof (udp packet)); any bids
for 1024?


* remote.c:PBUFSIZE

Maybe, just maybe (:-) I'll think about the changes needed to allow
packets larger than a G packet.  Per previous e-mail this involves:

``Fix the remote_write_size so that it is checked and the user informed
if it exceeds certain bounds (<= 0, > PBUFSIZE - ``that Q@$&)(!*$&
5''?). Consistent with the ``set remote packet'' command it should have
an ``auto'' mode where it picks up the PBUFSIZE value.  It would also
need to handle multi-arch which can change PBUFSIZE under your feet.  I
suspect that it will lead to the elimination of min() calls and the
replacement of several macros with statics.

With that in place, a separate patch to allow very large and possibly
infinite (hey, why set an arbitrary limit? :-) packet sizes.  With
regard to ``(256*1024)'', folklore has it that some hosts do not handle
alloca()s larger than 64k.  If you're going to allow large transfers
then it will need to avoid alloca().''

If someone wants to take this up .... :-)



	comments, complaints?

		Andrew
