


Re: why is gdb 5.2 so slow


On Fri, Nov 01, 2002 at 11:55:26AM -0500, Andrew Cagney wrote:
> 
> >Both.  Things we do wrong:
> >- GDB can't handle being told that just one thread is stopped.  If we
> >could, then we wouldn't have to stop all threads for shared library
> >events; there's a mutex in the system library so we don't even have to
> >worry about someone hitting the breakpoint.  We could also use this to
> >save time on conditional breakpoints; if we aren't stopping, why stop
> >all other threads?
> 
> [my guess] If the condition fails, we need to thread-hop.  If the 
> condition succeeds we need to stop all threads anyway.

Oh, blast it.  So we can't use this for general conditional
breakpoints.  If the condition is true we stop all threads before
giving a prompt; if the condition is false we stop all threads in order
to step over the breakpoint.
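
Spelled out, that logic looks something like this (a toy, compilable
sketch; every name in it is invented for illustration, none of this is
GDB's actual code):

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-ins for GDB internals -- every name here is made up.  */
struct thread { int id; };
struct breakpoint { int number; bool condition_value; };

static void stop_all_threads (void)   { puts ("stop all threads"); }
static void resume_all_threads (void) { puts ("resume all threads"); }
static void remove_breakpoint (struct breakpoint *b) { printf ("remove bp %d\n", b->number); }
static void insert_breakpoint (struct breakpoint *b) { printf ("insert bp %d\n", b->number); }
static void single_step (struct thread *t) { printf ("step thread %d\n", t->id); }

/* Either way the condition goes, every thread ends up stopped: true
   means stop the world and prompt; false means stop the world anyway,
   because stepping T over the breakpoint requires pulling the
   breakpoint out, and no other thread may run through the gap.  */
static void
handle_conditional_bp_hit (struct thread *t, struct breakpoint *b)
{
  stop_all_threads ();
  if (b->condition_value)
    printf ("thread %d stopped at bp %d: give the user a prompt\n",
            t->id, b->number);
  else
    {
      remove_breakpoint (b);
      single_step (t);
      insert_breakpoint (b);
      resume_all_threads ();
    }
}

int main (void)
{
  struct thread t = { 1 };
  struct breakpoint b = { 3, false };
  handle_conditional_bp_hit (&t, &b);
  return 0;
}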

We could do thread-specific breakpoints hit by the wrong thread this
way... and thread-specific conditional breakpoints hit by the right
thread but with the condition false could _probably_ be done this way
too, though implementing it would be complicated.

Now, thread-specific breakpoints hit by the wrong thread could be used
to speed up "next"/software-single-step...

> Knowing that shlibs are wrapped in a mutex is definitely something to 
> exploit.
> 
> >- Removing all breakpoints is just wrong; there's a test in
> >signals.exp (xfailed :P) which shows why.  We should _only_ be
> >removing breakpoints at the address we're hopping over.
> >
> >- No memory cache by default.  thread_db spends a LOT of time reading
> >from the inferior.
> 
> Based on a verbal description I was given, I believe that the current 
> dcache model is slightly wrong.  It should behave more like the regcache 
> viz:
> - ask for one register, get back the register file
> hence:
> - ask for one byte, get back one page, OR
> - ask for one byte, mmap the entire target process address space
> That way the target decides.
> 
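That page-granularity model, sketched out (a toy one-entry cache;
target_read_page() is a made-up stand-in for whatever bulk-read
primitive the target provides):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

struct dcache_page {
  uintptr_t base;                 /* page-aligned target address */
  unsigned char data[PAGE_SIZE];
  bool valid;
};

static struct dcache_page cache;  /* one entry, for brevity */

/* Stub standing in for the target's bulk read -- in real life, one
   ptrace/procfs transaction.  */
static int
target_read_page (uintptr_t base, unsigned char *buf)
{
  (void) base;
  memset (buf, 0xab, PAGE_SIZE);  /* pretend we read target memory */
  return 0;
}

/* Ask for one byte; on a miss, fill the whole page it lives on.  */
static int
dcache_read_byte (uintptr_t addr, unsigned char *byte)
{
  uintptr_t base = addr & ~(uintptr_t) (PAGE_SIZE - 1);

  if (!cache.valid || cache.base != base)
    {
      if (target_read_page (base, cache.data) != 0)
        return -1;
      cache.base = base;
      cache.valid = true;
    }
  *byte = cache.data[addr - base];
  return 0;
}

int main (void)
{
  unsigned char b;
  dcache_read_byte (0x1234, &b);  /* miss: one bulk target access */
  dcache_read_byte (0x1235, &b);  /* hit: no target access at all */
  return 0;
}
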
> HP, long ago, was proposing zero-copy target memory accesses.
> 
> >- No ptrace READDATA request for most Linux targets to read a large
> >chunk.  I keep submitting patches for some other ptrace cleanups that
> >will let me add this one to the kernel, and they keep hitting a blank
> >wall.  I may start maintaining 2.4 patches publicly and see if people
> >use them!
> 
> Uli (glibc), KevinB, MichaelS, and I happened to be in the same room and 
> talked about this.  /procfs was suggested as an alternative path.  For 
> ptrace() Uli indicated something about running out of register arguments 
> to use across a system call.

I don't know what he's referring to... wait... request, pid, len,
target address, buffer - that's five arguments.  x86 can only pass
four.  Crappy, but it could be worked around.
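
For the record, here is roughly what reading a chunk costs today with
plain ptrace - one syscall per word, which is exactly the loop a
READDATA request would collapse into a single call (untested sketch):

#include <errno.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>

/* Read LEN bytes from a stopped, ptrace-attached child, one word at a
   time.  Returns 0 on success, -1 on the first failed peek.  */
static int
peektext_read (pid_t pid, long addr, void *buf, size_t len)
{
  size_t i;

  for (i = 0; i < len; i += sizeof (long))
    {
      long word;
      size_t n = len - i >= sizeof (long) ? sizeof (long) : len - i;

      errno = 0;
      word = ptrace (PTRACE_PEEKTEXT, pid, (void *) (addr + i), NULL);
      if (word == -1 && errno != 0)
        return -1;
      memcpy ((char *) buf + i, &word, n);
    }
  return 0;
}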

In any case, this reminded me of something I keep forgetting.  On
modern kernels, a process that has ptrace-attached to a child can open
the child's /proc/<pid>/mem and read from it.  Writing to it is
disabled, and mmap is not implemented (oh, the violence to the mm layer
if that were allowed!).  But reading from it is probably faster than
PTRACE_PEEKTEXT.  I'll investigate.
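
Something like this is what I'd try (rough, untested sketch; error
handling mostly trimmed, and high addresses on 32-bit may need an
llseek/pread64 variant):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Attach, wait for the stop, then pull LEN bytes in one read() instead
   of one ptrace() per word.  */
static ssize_t
proc_mem_read (pid_t pid, off_t addr, void *buf, size_t len)
{
  char path[64];
  int fd;
  ssize_t n = -1;

  if (ptrace (PTRACE_ATTACH, pid, NULL, NULL) == -1)
    return -1;
  waitpid (pid, NULL, 0);                 /* wait until the child stops */

  snprintf (path, sizeof (path), "/proc/%d/mem", (int) pid);
  fd = open (path, O_RDONLY);
  if (fd >= 0)
    {
      if (lseek (fd, addr, SEEK_SET) != (off_t) -1)
        n = read (fd, buf, len);          /* one syscall, whole chunk */
      close (fd);
    }

  ptrace (PTRACE_DETACH, pid, NULL, NULL);
  return n;
}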

> >- Too many calls to thread_db in the LinuxThreads case.  It's a nice
> >generic layer but implemented such that the genericity (? :P) comes
> >with a severe cost in performance.  We need most of the layer; I've
> >seen the NGPT support patch for GDB, and it's very simple, precisely
> >because of this layer.  But we could do staggeringly better if we just
> >had a guarantee that there was a one-to-one, unchanging LWP<->thread
> >correspondence (no userspace scheduling etc.).  Both LinuxThreads and
> >the new NPTL library have this property.  Then we don't need to use
> >thread_db to access the inferior at all, only to collect new thread
> >information.
> 
> Apparently that guarantee is coming.  Solaris, for instance, is moving 
> back to 1:1.  My instinct is that reducing the system calls will make a 
> far greater improvement than trimming back glibc.

Well, I'd hate to lose NGPT support; like it or not, a lot of people
(especially in the carrier-grade space) are starting to use it.  At the
same time, we shouldn't go through the heavyweight thread_db interface
where it isn't needed.
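
For what it's worth, with that 1:1 guarantee the fast path is trivial,
with no thread_db in the loop at all.  A sketch (x86 Linux;
td_thr_getgregs is the heavyweight route this would replace):

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>

/* With a guaranteed one-to-one, unchanging LWP<->thread mapping,
   fetching a thread's registers is just ptrace on the LWP -- one
   syscall, versus td_thr_getgregs() going through the generic
   thread_db layer and its ps_pdread()/ps_lgetregs() callbacks.  */
static int
fetch_thread_regs (pid_t lwp, struct user_regs_struct *regs)
{
  return ptrace (PTRACE_GETREGS, lwp, NULL, regs) == -1 ? -1 : 0;
}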


-- 
Daniel Jacobowitz
MontaVista Software                         Debian GNU/Linux Developer

