This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.
gdb -vs- gcc svn trunk
- From: Tom Tromey <tromey at redhat dot com>
- To: GDB Development <gdb at sourceware dot org>
- Date: Tue, 27 Jul 2010 12:15:13 -0600
- Subject: gdb -vs- gcc svn trunk
If you've been reading the patch list, you'll know that I've been
regression testing gdb against gcc svn trunk. I'm done sending patches,
so I thought I would send a little status report.
gdb does not build using gcc svn trunk. I have attached a patch that makes it
build, but I consider this patch a bit dubious. We may be seeing gcc
bugs; in particular, the remote.c change is quite fishy.
After all the patches, including the RFC'd dwarf2read.c patch, there is
still a regression:
-PASS: gdb.cp/class2.exp: p acp->c1
-PASS: gdb.cp/class2.exp: p acp->c2
+FAIL: gdb.cp/class2.exp: p acp->c1
+FAIL: gdb.cp/class2.exp: p acp->c2
This is a GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45088
that's it,
Tom
diff --git a/gdb/iq2000-tdep.c b/gdb/iq2000-tdep.c
index 60222fb..a531eb6 100644
--- a/gdb/iq2000-tdep.c
+++ b/gdb/iq2000-tdep.c
@@ -214,10 +214,10 @@ iq2000_scan_prologue (struct gdbarch *gdbarch,
int tgtreg;
signed short offset;
+ sal.end = sal.pc = 0;
if (scan_end == (CORE_ADDR) 0)
{
loop_end = scan_start + 100;
- sal.end = sal.pc = 0;
}
else
{
diff --git a/gdb/remote.c b/gdb/remote.c
index 71eee5d..6f07620 100644
--- a/gdb/remote.c
+++ b/gdb/remote.c
@@ -5560,6 +5560,8 @@ remote_wait (struct target_ops *ops,
{
ptid_t event_ptid;
+ memset (&event_ptid, 0, sizeof (event_ptid));
+
if (non_stop)
event_ptid = remote_wait_ns (ptid, status, options);
else
diff --git a/gdb/xcoffread.c b/gdb/xcoffread.c
index aa6d27e..0637e6d 100644
--- a/gdb/xcoffread.c
+++ b/gdb/xcoffread.c
@@ -961,6 +961,9 @@ read_xcoff_symtab (struct partial_symtab *pst)
CORE_ADDR last_csect_val;
int last_csect_sec;
+ /* Warning avoidance. */
+ memset (&fcn_aux_saved, 0, sizeof (fcn_aux_saved));
+
this_symtab_psymtab = pst;
/* Get the appropriate COFF "constants" related to the file we're