
Re: Remote protocol tune up...



   Date: Thu, 14 Oct 1999 20:41:45 +1000
   From: Andrew Cagney <ac131313@cygnus.com>

   Per a message on gdb-patches, there has been some discussion about how
   to improve the performance of GDB's download.

I like this proposal overall!  As Bill Gatliff pointed out, it will
be important to have ways to force smaller packets to be sent, either
because the target has known limits on the packet size it can accept,
or because empirical trials show that packets above a certain size
overload something in the system (CPU speed, UART buffer sizes,
driver lameness, etc.).

When you've worked out the names and properties of the user-settable
variables, it would be useful to send them by the list so people can
see whether there are enough hooks to make them work with their
systems.
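
To make that concrete, here is the sort of thing I am imagining, as a
sketch rather than a patch; every name below is mine, not taken from
Andrew's code:

/* Sketch only: hypothetical user-settable knobs for download
   chunking.  Neither variable exists in GDB today; they stand in
   for whatever the real ones end up being called.  */

static int download_chunk_size = 512;  /* 0 means "no chunking".  */
static int target_max_packet = 0;      /* 0 means "no known target limit".  */

/* Clamp a requested transfer length against both knobs.  */

static int
clamp_transfer_len (int requested)
{
  int len = requested;

  if (download_chunk_size > 0 && len > download_chunk_size)
    len = download_chunk_size;
  if (target_max_packet > 0 && len > target_max_packet)
    len = target_max_packet;
  return len;
}

Two separate limits matter here: a user whose UART overruns above 1K
can pin target_max_packet there and still let the chunk size float,
which is the kind of hook I have in mind.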

   (Ignoring the bug fixes) This function will be changed to use
   target_write_memory_partial(). This allows the code to re-charge each
   transfer attempt to the maximum allowable (according to the chunk size).
   The GENERIC_LOAD_CHUNK will also be replaced by a run-time variable that
   even allows chunking to be disabled.  I'll also bump GENERIC_LOAD_CHUNK
   up to 512 (2*512 + header/footer <= sizeof (udp packet)), any bids for
   1024?

Almost everybody will be happy with 512.  People who need it larger
can set it in their .gdbinit and never think about it again.
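
For what it is worth, the loop I picture the download code turning
into looks roughly like this.  The target_write_memory_partial()
signature is a guess on my part and the helper names are invented
(clamp_transfer_len() is from the sketch above), so treat this as the
shape of the thing, not the patch:

/* Sketch only: chunked download of SIZE bytes from MYADDR to target
   address MEMADDR.  Assumes target_write_memory_partial() writes up
   to LEN bytes and returns how many it actually managed; the real
   interface may well differ.  */

static void
download_chunked (CORE_ADDR memaddr, char *myaddr, int size)
{
  while (size > 0)
    {
      /* Re-compute the chunk each time around, so a short write just
         shrinks the next request instead of aborting the download.  */
      int len = clamp_transfer_len (size);
      int written = target_write_memory_partial (memaddr, myaddr, len);

      if (written <= 0)
        error ("Memory write failed at 0x%lx", (unsigned long) memaddr);

      memaddr += written;
      myaddr += written;
      size -= written;
    }
}

A user who has to keep packets small could then put something like
``set download-chunk-size 256'' (again, a name I just made up) in
their .gdbinit and forget about it.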

   With that in place, a separate patch to allow very large and possibly
   infinite (hey, why set an arbitrary limit? :-) packet sizes.  With
   regard to ``(256*1024)'', folklore has it that some hosts do not handle
   alloca()s larger than 64k.  If you're going to allow large transfers
   then it will need to avoid alloca().

I would be astonished if anybody ever noticed a performance difference
between, say, 64K and 128K packets.  Even multi-megabyte programs will
only need a couple dozen packets at 64K (a 2MB image is 32 packets at
64K, versus 16 at 128K), and it would take a pretty funky system to
make the interaction overhead for those packets add up to even as much
as 1 second.  Such a system will have serious usability issues during
normal debugger usage; downloading speed will be the least of its
problems.

I would rather set an arbitrary limit of 60K than have users deal
with mysterious problems caused by a huge alloca failing somewhere
inside GDB.
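
The fix is straightforward whichever limit we pick: build the packet
in a heap buffer instead of on the stack.  A rough sketch, using plain
malloc/free rather than whatever cleanup machinery the real patch
would use, and with MAX_REMOTE_PACKET as my made-up name for the cap:

#include <stdlib.h>
#include <string.h>

/* Sketch only: keep large chunk buffers off the stack so they never
   depend on how big an alloca() the host tolerates.  */

#define MAX_REMOTE_PACKET (60 * 1024)

static char *
make_packet_buffer (int len)
{
  char *buf;

  /* Enforce the cap here rather than trusting every caller.  */
  if (len > MAX_REMOTE_PACKET)
    return NULL;

  buf = malloc (len);
  if (buf == NULL)
    return NULL;            /* Caller reports the error.  */
  memset (buf, 0, len);
  return buf;               /* Caller must free() it.  */
}

Whether the cap is 60K or something larger hardly matters once the
buffer is off the stack; the point is that nobody should ever see a
mysterious crash because their host's alloca() gave out at 64K.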

								Stan

