This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.
Re: [mi] watchpoint-scope exec async command
- From: "Eli Zaretskii" <eliz at gnu dot org>
- To: GDB <gdb at sources dot redhat dot com>
- Date: Sat, 02 Apr 2005 12:50:14 +0300
- Subject: Re: [mi] watchpoint-scope exec async command
- References: <20050329013634.GB6373@nevyn.them.org> <20050329024945.GC3957@white> <20050329020123.GA7266@nevyn.them.org> <01c534a6$Blat.v2.4$944e44a0@zahav.net.il> <20050329214414.GA3498@nevyn.them.org> <01c53564$Blat.v2.4$1da3c140@zahav.net.il> <20050331014749.GA264@white> <01c535ab$Blat.v2.4$c21baac0@zahav.net.il> <20050331205826.GA1590@white> <01c5369a$Blat.v2.4$2f0a6100@zahav.net.il> <20050401141105.GB29152@nevyn.them.org>
- Reply-to: Eli Zaretskii <eliz at gnu dot org>
> Date: Fri, 1 Apr 2005 09:11:05 -0500
> From: Daniel Jacobowitz <drow@false.org>
> Cc: GDB <gdb@sources.redhat.com>
>
> Actually, I don't think software watchpoints need it at all. A
> software watchpoint is implemented primarily by single-stepping the
> inferior, right? Well, after every single-step we know whether or not
> the breakpoint is still in scope...
That's true, but running the code that checks whether the watchpoint
is still in scope after every single-stepped instruction would slow
GDB down even more, whereas the scope breakpoint adds essentially no
overhead: it is hit only once, when the frame actually exits.
Of course, this is all based on speculative arguments, at least from
my side, so it could be 100% wrong. If someone who reads this knows
for a fact why scope breakpoints were introduced, please speak up.