This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Data for: [PROPOSAL] Checking for supported packets - revised


On Fri, May 12, 2006 at 09:24:50AM -0400, Daniel Jacobowitz wrote:
> I've been having second thoughts about the scheme.  When I showed this
> to Sandra (one of our coworkers at CodeSourcery, for those reading
> along) her primary reaction was that it was too complicated - too
> many cases.

Reaching a conclusion on which one of these options is "best" is not
going well; Jim and I and some other coworkers talked about it for a
long while yesterday, and ended up waffling.  Mark suggested that I put
together some hard numbers, so that we could see what we were really
dealing with.

The raw data's down below, but here are some conclusions first.

One of the tests (compare testcases "features" and "smallpkt") shows
that negotiating a large packet size is very helpful: the win ranges
from about 20% on a fast, low-latency link to about 300% on a
high-latency link.  So, that part is worth including.
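
A rough back-of-the-envelope model shows why.  (This is only a sketch of
the arithmetic, not code from GDB; the function name and parameters are
made up for illustration.)  Fetching a blob of target data in chunks of
at most max_packet bytes costs roughly one request/response round trip
per chunk, so shrinking the maximum packet size multiplies the latency
term while the raw transmission term stays the same:

#include <math.h>

/* Hypothetical cost model: time to fetch N_BYTES of target data when
   every chunk of at most MAX_PACKET bytes needs its own
   request/response round trip.  Request overhead is ignored for
   simplicity.  */

static double
estimated_fetch_time (double n_bytes, double max_packet,
                      double rtt_seconds, double bytes_per_second)
{
  double round_trips = ceil (n_bytes / max_packet);

  /* The latency term grows with the number of chunks; the
     transmission term depends only on the total size.  */
  return round_trips * rtt_seconds + n_bytes / bytes_per_second;
}

With a large round-trip time the first term dominates, which is why
"smallpkt" hurts so badly on the cross-country link but only modestly
at 9600 baud, where transmission time swamps everything.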

The currently unused mechanism from my previous posting for telling the
stub "the GDB client supports this new protocol feature" is still
valuable, because it was never performance-related in the first place.

As far as performance goes, compare the "+two" cases with the rows
directly above them to see the effect of adding two new packet probes
to an existing startup sequence, whether already chatty (gdbserver)
or bare metal (stub).  The higher the latency of the link, the higher
the performance cost, naturally.  But for low-latency links (serial,
or an Ethernet LAN) the cost was quite small.
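
For concreteness, taking differences straight from the table below: over
the cross-country TCP link the two extra probes add 3.946 - 3.311 =
0.635s to the gdbserver startup, while at 115200 baud they add only
0.269 - 0.258 = 0.011s.  Each unsupported probe is essentially one extra
round trip (request out, empty response back), so the cost scales with
link latency - though the noisy long-distance link makes the stub
numbers there harder to read.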

So, avoiding the probes is less important than I thought.  It's
still not completely ignorable.  Jim made some interesting points
about the loss of flexibility that comes from relying on this feature
widely.
I have a new proposal, which I will post in a second - this message is
too large already.


The table, with explanations further down (all times are wall-clock
seconds):

           sirius   local   nevyn   jahdo    9600   115200

gdbserver  3.311    0.730   0.615   0.380   2.694    0.258
 +two      3.946    0.728   0.862   0.430   2.775    0.269
stub       2.458    0.474   0.518   0.210   0.310    0.042
 +two      2.459    0.384   0.601   0.260   0.382    0.047
features   5.890    0.439   1.256   0.711   9.521    0.851
smallpkt  15.358    2.913   3.628   1.912  11.254    1.053




My notes from testing:

Over TCP, there is definitely a high per-packet cost; if you know anything
about TCP and IP, this makes perfect sense.  Not only is there negotiation
and round trip time involved, but also a lot of extra bytes per packet for
the headers.  When I started testing, gdbreplay did single byte writes,
which translated into single byte TCP packets.  I changed it to write the
whole remote response into a single TCP packet.  Since in a real stub the
ack is usually generated immediately and the packet response much later, I
wrote my test files to send the ack in one write and the response in a
second one.
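
For reference, the change was along these lines - a minimal sketch
rather than the actual patch, with the function name and plain file
descriptor assumed for illustration:

#include <string.h>
#include <unistd.h>

/* Send a logged remote-protocol response with a single write () call,
   so it goes out as one TCP segment instead of one segment per byte.
   The ack ("+") is still sent by a separate, earlier write, mimicking
   a real stub that acks immediately and answers later.  */

static void
replay_write_response (int fd, const char *response)
{
  size_t len = strlen (response);

  while (len > 0)
    {
      ssize_t n = write (fd, response, len);

      if (n <= 0)
        break;
      response += n;
      len -= (size_t) n;
    }
}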

Testing methodology:

I ran two gdbreplay binaries on symmetric log files, talking to each other. 
Over TCP this involved a couple of netcat binaries to initiate the
connection, since that was simpler than teaching gdbreplay to call
connect().  For TCP, I timed the netcats; for serial, I timed the
gdbreplay session which started with a write (corresponding to the GDB
client).  Timing was done using the 'time' shell builtin; of course, I
was interested in wall clock time, not user or system time.

I also added serial support to gdbreplay (copied from gdbserver).

I ran each test about three times and took the middle time; just eyeballing
here, not being too statistically careful.  The long distance TCP link,
especially, was very noisy.

These were my tests:

1. gdbserver-handshake.txt

I took an x86_64-linux GDB and gdbserver, debugging /bin/cat, and logged
"target remote".  Both were current HEAD versions.  This represents the
current case for a hosted system.

2. gdbserver-twomore.txt

I took gdbserver-handshake.txt and manually added two new unsupported
packets that would get probed during connection.
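
On the wire, each such probe is one extra exchange; an unrecognized
packet gets the empty response.  With a made-up packet name and the
checksum elided, the added traffic per probe looks roughly like:

  GDB:  $qNewFeatureProbe#xx
  stub: +
  stub: $#00

GDB waits for that empty response before moving on, so each probe costs
one full round trip.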

3. stub-shake.txt

I took an arm-none-eabi GDB and connected it to the CodeSourcery RDI stub.
This actually generated the next trace, but I hand-edited it to what it
would have looked like if I'd used a HEAD GDB.  No binary was loaded.

4. stub-twomore.txt

I took stub-shake.txt and manually added two new unsupported
packets that would get probed during connection.

5.  features-shake.txt

I took CodeSourcery's 2006-Q1 arm-none-eabi-gdb and connected it to the
CodeSourcery RDI stub.  This uses qPacketInfo to set a packet size maximum
of 16K and suppress qOffsets, but adds qPart:features and transfers a
good-sized chunk of XML.

6.  features-smallpacket.txt

Same as the previous test, except that instead of a 16K packet maximum I
used 256 bytes.  This caused the XML files to be split into much smaller
packets (and many more of them), and the additional request packets took
more total bytes: about 10% more bytes and three times as many packets
overall.
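
To put rough, made-up numbers on that: if the target description were,
say, 4K of XML, a 16K maximum lets it come back in a single
qPart:features response, while a 256-byte maximum turns it into
something like twenty request/response pairs, each paying for its own
qPart:features request packet plus the '$', '#', two checksum
characters, and an ack.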


DATA

The rows of these tables are the six tests above; all times are
wall-clock seconds.  The columns are testing configurations:

1. sirius: SSH tunnel from my local system to a machine across the country.
2. localhost: TCP over localhost, Linux
3. nevyn: TCP over 100MBit LAN, Linux
4. jahdo to nevyn: TCP over wireless 802.11g, Cygwin to Linux
5. caradoc to nevyn: Serial 9600 baud connection
6. caradoc to nevyn: Serial 115200 baud connection

           sirius   local   nevyn   jahdo    9600   115200

gdbserver  3.311    0.730   0.615   0.380   2.694    0.258
 +two      3.946    0.728   0.862   0.430   2.775    0.269
stub       2.458    0.474   0.518   0.210   0.310    0.042
 +two      2.459    0.384   0.601   0.260   0.382    0.047
features   5.890    0.439   1.256   0.711   9.521    0.851
smallpkt  15.358    2.913   3.628   1.912  11.254    1.053




-- 
Daniel Jacobowitz
CodeSourcery

