This is the mail archive of the binutils@sources.redhat.com mailing list for the binutils project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

[patch] vtable garbage-collection correction to selective6


Here's a correction for the ld-selective/selective6 test that I reported
and committed last week.  (BTW, it seems there was also an old, deleted
test by that name.)  The problem is that a .vtable_entry reloc is always
obeyed.  It should be ignored if the section in which it appears is
itself thrown away by the garbage collection.

I'd like to hear if the following solution is acceptable.

There are some issues I know of: the patch uses hashtab from libiberty,
which uses void * instead of PTR, but already includes ansidecl.h.
Hashtab also uses xcalloc, which AFAIK is not allowed from within bfd.
I plan to solve that by adding another creation function to hashtab,
perhaps called htab_create_return_on_fail (better name, anyone?), plus a
flag in the structure marking that calloc, rather than xcalloc, should
be used for memory allocation.
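
To make the proposal concrete, here is a rough sketch of what such a
creation function might look like.  This is illustrative only: the
struct layout and field names below are hypothetical placeholders, not
libiberty's actual internals, and htab_create_return_on_fail is the
proposed (not yet existing) name.  The point is just that plain calloc
can fail gracefully where xcalloc would abort.

```c
#include <stdlib.h>
#include <stddef.h>

/* Simplified stand-in for libiberty's struct htab; only the fields
   relevant to the proposal are shown (hypothetical sketch).  */
struct htab
{
  void **entries;
  size_t size;

  /* Proposed flag: when nonzero, plain calloc (which can return NULL)
     was used instead of xcalloc (which aborts on failure), and later
     allocations should do the same.  */
  int return_allocation_failure;
};

/* Proposed creation function: like htab_create, but returns NULL on
   allocation failure instead of letting xcalloc call the fatal
   out-of-memory handler.  */
static struct htab *
htab_create_return_on_fail (size_t size)
{
  struct htab *result = (struct htab *) calloc (1, sizeof (struct htab));

  if (result == NULL)
    return NULL;

  result->entries = (void **) calloc (size, sizeof (void *));
  if (result->entries == NULL)
    {
      free (result);
      return NULL;
    }

  result->size = size;
  result->return_allocation_failure = 1;
  return result;
}
```

A caller inside bfd would then check the return value for NULL, as it
already must for bfd's own allocators.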

There's another hash-table implementation in bfd, used for strings, but
I couldn't make it fit.  The need is to store an easily searched set of
asection *:s, drawn from the domain of all asection *:s in the link but
holding only a small subset: those asections containing a .vtable_entry
reloc that marks the specific entry in the specific vt that the hash
table is for.
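
The patch uses libiberty's hashtab with htab_hash_pointer and
htab_eq_pointer for this.  As a self-contained illustration of the idea
(a minimal stand-in written for this mail, not the real hashtab API), a
per-vtable-entry pointer set could look like:

```c
#include <stdlib.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal open-addressing pointer set: a simplified stand-in for a
   libiberty hashtab keyed on pointer identity.  Each vtable entry
   would own one such set, holding the asection *:s whose
   .vtable_entry relocs mark that entry.  No resizing, for brevity.  */
struct ptr_set
{
  void **slots;
  size_t n_slots;		/* Must be a power of two.  */
};

static struct ptr_set *
ptr_set_create (size_t n_slots)
{
  struct ptr_set *s = (struct ptr_set *) calloc (1, sizeof *s);

  if (s == NULL)
    return NULL;
  s->slots = (void **) calloc (n_slots, sizeof *s->slots);
  if (s->slots == NULL)
    {
      free (s);
      return NULL;
    }
  s->n_slots = n_slots;
  return s;
}

/* Hash the pointer value itself, as htab_hash_pointer does.  */
static size_t
ptr_hash (const void *p, size_t n_slots)
{
  return (((size_t) (uintptr_t) p) >> 3) & (n_slots - 1);
}

/* Insert P; returns 0 if the (non-resizing) table is full.  */
static int
ptr_set_insert (struct ptr_set *s, void *p)
{
  size_t probes;

  for (probes = 0; probes < s->n_slots; probes++)
    {
      size_t j = (ptr_hash (p, s->n_slots) + probes) & (s->n_slots - 1);

      if (s->slots[j] == NULL || s->slots[j] == p)
	{
	  s->slots[j] = p;
	  return 1;
	}
    }
  return 0;
}

static int
ptr_set_contains (const struct ptr_set *s, const void *p)
{
  size_t probes;

  for (probes = 0; probes < s->n_slots; probes++)
    {
      size_t j = (ptr_hash (p, s->n_slots) + probes) & (s->n_slots - 1);

      if (s->slots[j] == p)
	return 1;
      if (s->slots[j] == NULL)
	return 0;
    }
  return 0;
}
```

Each vtable entry can then answer "is this asection among my
referencers?" with a single membership test.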

Perhaps there are memory usage issues: an array of struct htab *:s is
allocated instead of an array of booleans, and each vt entry in that
array (save for those sharing the parent's array) gets its own hash
table, which initially uses about 66 bytes on a 32-bit machine for the
first .vtable_entry reloc.  I believe (but haven't measured) that this
is a net win in total, since the memory needed to read in and gc_mark a
section as in elf_gc_mark would be much higher than that of a hash table
with up to five entries, presumably "holding back" a vt reloc pointing
to an asection that would previously have been gc_marked.  All new hash
tables are destroyed before leaving elf_gc_sections.

There are also two arrays that can each hold pointers to all asections
in the link.  These arrays are filled with whatever new sections get
marked in elf_gc_mark.  They could be replaced with hash tables (and
bfd_get_section_id_bound would then be unnecessary), but I think they
are best kept as arrays: the only benefit of hash tables would be lower
memory usage, while iterating over them and adding new sections would be
slower.
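
The way the two arrays interact is a two-buffer worklist loop, as in
elf_gc_sections: newly marked sections collect in one array, then the
arrays are swapped and the freshly marked set is scanned, possibly
marking more, until a round marks nothing new.  A minimal
self-contained sketch, with small integers standing in for sections
(the "item N discovers item N+1" rule is purely a hypothetical stand-in
for a vtable reloc pulling in another section):

```c
#include <stddef.h>

#define LIMIT 5

/* Process one item: "marking" item N may discover item N + 1, a
   stand-in for a kept section whose vtable relocs reference another
   section.  */
static void
process (int item, int *marking, size_t *marking_count, int *seen)
{
  int next = item + 1;

  if (next < LIMIT && !seen[next])
    {
      seen[next] = 1;
      marking[(*marking_count)++] = next;
    }
}

/* Run the swap-the-buffers fixpoint loop; returns how many rounds ran.
   BUF_A and BUF_B must each hold LIMIT ints; SEEN is LIMIT zeroed
   flags.  */
static int
fixpoint (int *buf_a, int *buf_b, int *seen)
{
  int *checking = buf_a;
  int *marking = buf_b;
  size_t checking_count;
  size_t marking_count = 1;
  int rounds = 0;

  marking[0] = 0;
  seen[0] = 1;

  while (marking_count != 0)
    {
      size_t n;
      int *tmp = checking;

      /* Swap: last round's newly marked items become this round's
	 items to check, exactly as elf_gc_sections swaps its
	 "checking" and "marking" arrays.  */
      checking = marking;
      marking = tmp;
      checking_count = marking_count;
      marking_count = 0;

      for (n = 0; n < checking_count; n++)
	process (checking[n], marking, &marking_count, seen);

      rounds++;
    }

  return rounds;
}
```

The loop terminates because each section (here, each integer) is
marked at most once, so eventually a round adds nothing.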

I also temporarily add a mark bit to the relocs in the vt, to identify
them and stop them from being processed by elf_gc_mark.  The only
alternative I could see was a hash table holding all vt relocs, but that
would mean an extra hash-table lookup for every reloc.
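
The mark-bit manipulation itself is plain masking on r_info.  The
helpers below mirror what ELF_R_VTABLE_MARK_BIT in the patch does,
simplified to operate on a bare uint64_t rather than an
Elf_Internal_Rela:

```c
#include <stdint.h>

/* Bit 31 of r_info is reserved as a temporary "do not traverse" mark,
   as in the patch: for ARCH_SIZE 32 it is out of the way of used
   bits, and for 64 it sits at the top of the reloc-type field, well
   above the values actually in use.  */
#define VTABLE_MARK_BIT ((uint64_t) 1 << 31)

/* Set the mark, so elf_gc_mark skips this reloc.  */
static uint64_t
mark_reloc (uint64_t r_info)
{
  return r_info | VTABLE_MARK_BIT;
}

/* Test whether the reloc is currently blocked from traversal.  */
static int
reloc_is_marked (uint64_t r_info)
{
  return (r_info & VTABLE_MARK_BIT) != 0;
}

/* Clear the mark, restoring the original r_info so the reloc is
   processed on the next gc_mark pass.  */
static uint64_t
unmark_reloc (uint64_t r_info)
{
  return r_info & ~VTABLE_MARK_BIT;
}
```

Unmarking must restore r_info exactly, since the symbol index and
reloc type encoded there are still needed afterwards.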

The conceptual changes are in elf_gc_sections; all the rest is just
support.  The new functions elf_gc_mark_vtentry_relocs and
elf_gc_sweep_vtentry_relocs used elf_gc_smash_unused_vtentry_relocs as a
template.

Comments?
Or is it OK to commit (with a make-dep change to Makefile.am)?

2000-10-02  Hans-Peter Nilsson  <hp@bitrange.com>

	* section.c (bfd_get_section_id_bound): New.
	* elf-bfd.h (struct elf_link_hash_entry): Change type of member
	vtable_entries_used to be struct htab **.
	* elflink.h: Include hashtab.h from libiberty.
	(struct elf_gc_mark_info, struct elf_gc_info): New.
	(elf_gc_mark): Add parameter gc_info.  All callers changed.
	Do not process relocs marked with ELF_R_VTABLE_MARK_BIT.
	Add recursed sections to gc_info->sections.
	(elf_gc_copy_htab_entry): New.
	(elf_gc_propagate_vtable_entries_used): Adjust to using
	hash-tables for vt entry usage.
	(elf_gc_smash_unused_vtentry_relocs): Similar.
	Add ELF_R_VTABLE_MARK_BIT mark to vt entry relocs whose usage
	depends on whether the user is garbage-collected.
	(elf_gc_mark_vtentry_relocs): New.
	(elf_gc_sweep_vtentry_relocs): New.
	(elf_gc_sections): Change to use a list of marked sections to
	iterate over vt:s checking if the users of the entries are kept
	after garbage collection.
	(elf_gc_record_vtentry): Adjust to mark the user in a hash-table
	for the vt entry.

Index: section.c
===================================================================
RCS file: /cvs/src/src/bfd/section.c,v
retrieving revision 1.23
diff -p -c -r1.23 section.c
*** section.c	2000/09/20 04:20:26	1.23
--- section.c	2000/10/02 10:32:18
*************** bfd_make_section_old_way (abfd, name)
*** 733,738 ****
--- 733,760 ----
    return sec;
  }
  
+ static int section_id = 0x10;  /* id 0 to 3 used by STD_SECTION.  */
+ 
+ /*
+ FUNCTION
+ 	bfd_get_section_id_bound
+ 
+ SYNOPSIS
+ 	int bfd_get_section_id_bound(bfd *abfd);
+ 
+ DESCRIPTION
+    Return a number higher than the highest currently assigned section id;
+    that is, a number higher than the total number of sections that have
+    so far been observed in this bfd session.
+ */
+ 
+ int
+ bfd_get_section_id_bound (abfd)
+      bfd *abfd ATTRIBUTE_UNUSED;
+ {
+   return section_id;
+ }
+ 
  /*
  FUNCTION
  	bfd_make_section_anyway
*************** bfd_make_section_anyway (abfd, name)
*** 755,761 ****
       bfd *abfd;
       const char *name;
  {
-   static int section_id = 0x10;  /* id 0 to 3 used by STD_SECTION.  */
    asection *newsect;
    asection **prev = &abfd->sections;
    asection *sect = abfd->sections;
--- 777,782 ----

Index: elf-bfd.h
===================================================================
RCS file: /cvs/src/src/bfd/elf-bfd.h,v
retrieving revision 1.27
diff -p -c -r1.27 elf-bfd.h
*** elf-bfd.h	2000/08/22 19:33:16	1.27
--- elf-bfd.h	2000/10/02 12:03:02
*************** typedef struct
*** 73,78 ****
--- 73,79 ----
  
  /* ELF linker hash table entries.  */
  
+ struct htab;
  struct elf_link_hash_entry
  {
    struct bfd_link_hash_entry root;
*************** struct elf_link_hash_entry
*** 149,155 ****
       and track a size while the symbol is still undefined.  It is indexed
       via offset/sizeof(target_void_pointer).  */
    size_t vtable_entries_size;
!   boolean *vtable_entries_used;
  
    /* Virtual table derivation info.  */
    struct elf_link_hash_entry *vtable_parent;
--- 150,156 ----
       and track a size while the symbol is still undefined.  It is indexed
       via offset/sizeof(target_void_pointer).  */
    size_t vtable_entries_size;
!   struct htab **vtable_entries_used;
  
    /* Virtual table derivation info.  */
    struct elf_link_hash_entry *vtable_parent;
Index: elflink.h
===================================================================
RCS file: /cvs/src/src/bfd/elflink.h,v
retrieving revision 1.71
diff -p -c -r1.71 elflink.h
*** elflink.h	2000/08/24 17:41:40	1.71
--- elflink.h	2000/10/02 12:03:23
*************** Foundation, Inc., 59 Temple Place - Suit
*** 19,24 ****
--- 19,63 ----
  
  /* ELF linker code.  */
  
+ #include "hashtab.h"
+ 
+ /* We need to add mark bits to the reloc to easily recognize relocs in
+    vtables.  For both ARCH_SIZE 32 and 64, bit 31 of r_info is unlikely to
+    be used; for 32 it is out of the way of used bits, and for 64 it is in
+    the reloc-type, which is normally in the range 0 to hundreds.
+    BFD_ASSERTs are added appropriately, to check that freshly read relocs
+    do not have this bit set.  */
+ #define ELF_R_VTABLE_MARK_BIT ((bfd_vma) 1 << 31)
+ 
+ /* We need to hold an array of sections and a section count.  */
+ 
+ struct elf_gc_mark_info
+ {
+   /* Section table for a set of sections with a common property.  */
+   struct sec **sections;
+ 
+   /* How many entries there are in "sections".  */
+   unsigned int section_count;
+ };
+ 
+ /* This struct is needed to pass information to routines called via
+    elf_link_hash_traverse which must return an error and look at an array
+    of sections and fill in another.  */
+ 
+ struct elf_gc_info
+ {
+   /* The bfd whose contents we're gc:ing. */
+   bfd *abfd;
+ 
+   /* Errors signalled here.  */
+   boolean ok;
+ 
+   struct bfd_link_info *info;
+ 
+   /* We add new marked sections to MARKING, and check those in CHECKING.  */
+   struct elf_gc_mark_info marking, checking;
+ };
+ 
  /* This struct is used to pass information to routines called via
     elf_link_hash_traverse which must return failure.  */
  
*************** static boolean elf_gc_mark
*** 6263,6269 ****
    PARAMS ((struct bfd_link_info *info, asection *sec,
  	   asection * (*gc_mark_hook)
  	     PARAMS ((bfd *, struct bfd_link_info *, Elf_Internal_Rela *,
! 		      struct elf_link_hash_entry *, Elf_Internal_Sym *))));
  
  static boolean elf_gc_sweep
    PARAMS ((struct bfd_link_info *info,
--- 6302,6309 ----
    PARAMS ((struct bfd_link_info *info, asection *sec,
  	   asection * (*gc_mark_hook)
  	     PARAMS ((bfd *, struct bfd_link_info *, Elf_Internal_Rela *,
! 		      struct elf_link_hash_entry *, Elf_Internal_Sym *)),
! 	  struct elf_gc_mark_info *));
  
  static boolean elf_gc_sweep
    PARAMS ((struct bfd_link_info *info,
*************** static boolean elf_gc_propagate_vtable_e
*** 6283,6301 ****
  static boolean elf_gc_smash_unused_vtentry_relocs
    PARAMS ((struct elf_link_hash_entry *h, PTR dummy));
  
  /* The mark phase of garbage collection.  For a given section, mark
     it, and all the sections which define symbols to which it refers.  */
  
  static boolean
! elf_gc_mark (info, sec, gc_mark_hook)
       struct bfd_link_info *info;
       asection *sec;
       asection * (*gc_mark_hook)
         PARAMS ((bfd *, struct bfd_link_info *, Elf_Internal_Rela *,
  		struct elf_link_hash_entry *, Elf_Internal_Sym *));
  {
    boolean ret = true;
- 
    sec->gc_mark = 1;
  
    /* Look through the section relocs.  */
--- 6323,6349 ----
  static boolean elf_gc_smash_unused_vtentry_relocs
    PARAMS ((struct elf_link_hash_entry *h, PTR dummy));
  
+ static boolean elf_gc_mark_vtentry_relocs
+   PARAMS ((struct elf_link_hash_entry *h, PTR dummy));
+ 
+ static boolean elf_gc_sweep_vtentry_relocs
+   PARAMS ((struct elf_link_hash_entry *h, PTR dummy));
+ 
+ static int elf_gc_copy_htab_entry PARAMS ((PTR *, PTR));
+ 
  /* The mark phase of garbage collection.  For a given section, mark
     it, and all the sections which define symbols to which it refers.  */
  
  static boolean
! elf_gc_mark (info, sec, gc_mark_hook, gc_info)
       struct bfd_link_info *info;
       asection *sec;
       asection * (*gc_mark_hook)
         PARAMS ((bfd *, struct bfd_link_info *, Elf_Internal_Rela *,
  		struct elf_link_hash_entry *, Elf_Internal_Sym *));
+      struct elf_gc_mark_info *gc_info;
  {
    boolean ret = true;
    sec->gc_mark = 1;
  
    /* Look through the section relocs.  */
*************** elf_gc_mark (info, sec, gc_mark_hook)
*** 6362,6367 ****
--- 6410,6419 ----
  	  struct elf_link_hash_entry *h;
  	  Elf_Internal_Sym s;
  
+ 	  /* Do not traverse possibly-unused relocs in vtables.  */
+ 	  if (rel->r_info & ELF_R_VTABLE_MARK_BIT)
+ 	    continue;
+ 
  	  r_symndx = ELF_R_SYM (rel->r_info);
  	  if (r_symndx == 0)
  	    continue;
*************** elf_gc_mark (info, sec, gc_mark_hook)
*** 6389,6399 ****
  	    }
  
  	  if (rsec && !rsec->gc_mark)
! 	    if (!elf_gc_mark (info, rsec, gc_mark_hook))
! 	      {
! 		ret = false;
! 		goto out2;
! 	      }
  	}
  
      out2:
--- 6441,6455 ----
  	    }
  
  	  if (rsec && !rsec->gc_mark)
! 	    {
! 	      gc_info->sections[gc_info->section_count++] = rsec;
! 
! 	      if (!elf_gc_mark (info, rsec, gc_mark_hook, gc_info))
! 		{
! 		  ret = false;
! 		  goto out2;
! 		}
! 	    }
  	}
  
      out2:
*************** elf_gc_sweep_symbol (h, idxptr)
*** 6500,6507 ****
  
    return true;
  }
  
! /* Propogate collected vtable information.  This is called through
     elf_link_hash_traverse.  */
  
  static boolean
--- 6556,6580 ----
  
    return true;
  }
+ 
+ /* A htab_trav function that copies a hashtab entry into another hashtab,
+    unless it's already there.  */
+ static int
+ elf_gc_copy_htab_entry (htab_entry_slot, htabvp)
+      PTR *htab_entry_slot;
+      PTR htabvp;
+ {
+   PTR *retp = htab_find_slot ((htab_t) htabvp, *htab_entry_slot, INSERT);
+ 
+   if (retp == NULL)
+     return 0;
+ 
+   *retp = *htab_entry_slot;
+ 
+   return 1;
+ }
  
! /* Propagate collected vtable information.  This is called through
     elf_link_hash_traverse.  */
  
  static boolean
*************** elf_gc_propagate_vtable_entries_used (h,
*** 6518,6524 ****
      return true;
  
    /* If we've already been done, exit.  */
!   if (h->vtable_entries_used && h->vtable_entries_used[-1])
      return true;
  
    /* Make sure the parent's table is up to date.  */
--- 6591,6597 ----
      return true;
  
    /* If we've already been done, exit.  */
!   if (h->vtable_entries_used && h->vtable_entries_used[-1] != NULL)
      return true;
  
    /* Make sure the parent's table is up to date.  */
*************** elf_gc_propagate_vtable_entries_used (h,
*** 6534,6551 ****
    else
      {
        size_t n;
!       boolean *cu, *pu;
  
        /* Or the parent's entries into ours.  */
        cu = h->vtable_entries_used;
!       cu[-1] = true;
        pu = h->vtable_parent->vtable_entries_used;
        if (pu != NULL)
  	{
  	  n = h->vtable_parent->vtable_entries_size / FILE_ALIGN;
  	  while (--n != 0)
  	    {
! 	      if (*pu) *cu = true;
  	      pu++, cu++;
  	    }
  	}
--- 6607,6637 ----
    else
      {
        size_t n;
!       htab_t *cu, *pu;
  
        /* Or the parent's entries into ours.  */
        cu = h->vtable_entries_used;
!       cu[-1] = (htab_t) -1;
        pu = h->vtable_parent->vtable_entries_used;
        if (pu != NULL)
  	{
  	  n = h->vtable_parent->vtable_entries_size / FILE_ALIGN;
  	  while (--n != 0)
  	    {
! 	      if (*pu)
! 		{
! 		  if (*cu == NULL)
! 		    {
! 		      *cu = htab_create (htab_elements (*pu),
! 					 htab_hash_pointer,
! 					 htab_eq_pointer, NULL);
! 		      if (*cu == NULL)
! 			return *(boolean *) okp = false;
! 		    }
! 
! 		  htab_traverse (*pu, elf_gc_copy_htab_entry, *cu);
! 		}
! 
  	      pu++, cu++;
  	    }
  	}
*************** elf_gc_smash_unused_vtentry_relocs (h, o
*** 6586,6598 ****
    for (rel = relstart; rel < relend; ++rel)
      if (rel->r_offset >= hstart && rel->r_offset < hend)
        {
! 	/* If the entry is in use, do nothing.  */
  	if (h->vtable_entries_used
  	    && (rel->r_offset - hstart) < h->vtable_entries_size)
  	  {
  	    bfd_vma entry = (rel->r_offset - hstart) / FILE_ALIGN;
! 	    if (h->vtable_entries_used[entry])
! 	      continue;
  	  }
  	/* Otherwise, kill it.  */
  	rel->r_offset = rel->r_info = rel->r_addend = 0;
--- 6672,6690 ----
    for (rel = relstart; rel < relend; ++rel)
      if (rel->r_offset >= hstart && rel->r_offset < hend)
        {
! 	/* If the entry might be in use, mark it to not be traversed when
! 	   gc_marking through relocs.  */
  	if (h->vtable_entries_used
  	    && (rel->r_offset - hstart) < h->vtable_entries_size)
  	  {
  	    bfd_vma entry = (rel->r_offset - hstart) / FILE_ALIGN;
! 	    if (h->vtable_entries_used[entry] != NULL)
! 	      {
! 		BFD_ASSERT ((rel->r_info & ELF_R_VTABLE_MARK_BIT) == 0);
! 
! 		rel->r_info |= ELF_R_VTABLE_MARK_BIT;
! 		continue;
! 	      }
  	  }
  	/* Otherwise, kill it.  */
  	rel->r_offset = rel->r_info = rel->r_addend = 0;
*************** elf_gc_smash_unused_vtentry_relocs (h, o
*** 6601,6606 ****
--- 6693,6859 ----
    return true;
  }
  
+ /* Iterate over vtentry relocs with the list of gc:ed sections: For
+    vtentries that are found to be used, remove the non-traverse mark and
+    gc_mark the section for the reloc.  */
+ 
+ static boolean
+ elf_gc_mark_vtentry_relocs (h, pp)
+      struct elf_link_hash_entry *h;
+      PTR pp;
+ {
+   asection *sec;
+   bfd_vma hstart, hend;
+   Elf_Internal_Rela *relstart, *relend, *rel;
+   struct elf_backend_data *bed;
+   struct elf_gc_info *gc_infop = (struct elf_gc_info *) pp;
+   asection * (*gc_mark_hook)
+     PARAMS ((bfd *, struct bfd_link_info *, Elf_Internal_Rela *,
+ 	     struct elf_link_hash_entry *, Elf_Internal_Sym *));
+   boolean redo_mark = false;
+ 
+   gc_mark_hook = get_elf_backend_data (gc_infop->abfd)->gc_mark_hook;
+ 
+   /* Take care of both those symbols that do not describe vtables as
+      well as those that are not loaded.  */
+   if (h->vtable_parent == NULL)
+     return true;
+ 
+   BFD_ASSERT (h->root.type == bfd_link_hash_defined
+ 	      || h->root.type == bfd_link_hash_defweak);
+ 
+   sec = h->root.u.def.section;
+   hstart = h->root.u.def.value;
+   hend = hstart + h->size;
+ 
+   relstart = (NAME(_bfd_elf,link_read_relocs)
+ 	      (sec->owner, sec, NULL, (Elf_Internal_Rela *) NULL, true));
+   if (!relstart)
+     {
+       gc_infop->ok = false;
+       return false;
+     }
+ 
+   bed = get_elf_backend_data (sec->owner);
+   relend = relstart + sec->reloc_count * bed->s->int_rels_per_ext_rel;
+ 
+   for (rel = relstart; rel < relend; ++rel)
+     if (rel->r_offset >= hstart && rel->r_offset < hend)
+       {
+ 	/* Check if this reloc is an entry in a vtable, and has not been
+ 	   smashed or found to be used.  */
+ 	if (h->vtable_entries_used
+ 	    && (rel->r_offset - hstart) < h->vtable_entries_size
+ 	    && (rel->r_info & ELF_R_VTABLE_MARK_BIT) == ELF_R_VTABLE_MARK_BIT)
+ 	  {
+ 	    bfd_vma entry = (rel->r_offset - hstart) / FILE_ALIGN;
+ 	    unsigned int n = gc_infop->checking.section_count;
+ 
+ 	    /* For this reloc, check if any of the sections referring to
+ 	       it is in gc_infop->checking.sections, the recently marked
+ 	       sections.  If so, remove the block mark on this reloc, and
+ 	       gc_mark the section it points to, adding new sections to
+ 	       what is in gc_infop->marking.  */
+ 	    while (n--)
+ 	      if (htab_find (h->vtable_entries_used[entry],
+ 			     gc_infop->checking.sections[n]) != NULL)
+ 		{
+ 		  /* Unmark this reloc, so we don't get here again.  */
+ 		  rel->r_info &= ~ELF_R_VTABLE_MARK_BIT;
+ 
+ 		  /* Only re-mark if this section itself has been marked.  */
+ 		  redo_mark = sec->gc_mark;
+ 		  break;
+ 		}
+ 	  }
+       }
+ 
+   /* To relieve ourselves of the lines of copied code to find the section
+      for each reloc and elf_gc_mark them as they're found, we just gc the
+      current section again after removing the marks from vtable relocs we
+      want to process.  The current section for a vtable for normal gc
+      consists of just the vtable.  It unfortunately follows that newly
+      marked sections must be added to gc_infop by the caller, when the
+      natural place would be at the beginning of elf_gc_mark.  Else we
+      would re-add the current section to be re-visited or run through
+      hoops to remove it.  */
+   if (redo_mark
+       && !elf_gc_mark (gc_infop->info, sec, gc_mark_hook, &gc_infop->marking))
+     {
+       gc_infop->ok = false;
+       return false;
+     }
+ 
+   return true;
+ }
+ 
+ /* Sweep and smash unused vtentry relocs; deallocate storage we used.  */
+ 
+ static boolean
+ elf_gc_sweep_vtentry_relocs (h, okp)
+      struct elf_link_hash_entry *h;
+      PTR okp;
+ {
+   asection *sec;
+   bfd_vma hstart, hend;
+   Elf_Internal_Rela *relstart, *relend, *rel;
+   struct elf_backend_data *bed;
+ 
+   /* Take care of both those symbols that do not describe vtables as
+      well as those that are not loaded.  */
+   if (h->vtable_parent == NULL)
+     return true;
+ 
+   BFD_ASSERT (h->root.type == bfd_link_hash_defined
+ 	      || h->root.type == bfd_link_hash_defweak);
+ 
+   sec = h->root.u.def.section;
+   hstart = h->root.u.def.value;
+   hend = hstart + h->size;
+ 
+   relstart = (NAME(_bfd_elf,link_read_relocs)
+ 	      (sec->owner, sec, NULL, (Elf_Internal_Rela *) NULL, true));
+   if (!relstart)
+     return *(boolean *)okp = false;
+   bed = get_elf_backend_data (sec->owner);
+   relend = relstart + sec->reloc_count * bed->s->int_rels_per_ext_rel;
+ 
+   for (rel = relstart; rel < relend; ++rel)
+     if (rel->r_offset >= hstart && rel->r_offset < hend)
+       {
+ 	if (h->vtable_entries_used
+ 	    && (rel->r_offset - hstart) < h->vtable_entries_size)
+ 	  {
+ 	    bfd_vma entry = (rel->r_offset - hstart) / FILE_ALIGN;
+ 
+ 	    if ((rel->r_info & ELF_R_VTABLE_MARK_BIT)
+ 		== ELF_R_VTABLE_MARK_BIT)
+ 	      {
+ 		/* This entry does not have any kept sections referring to
+ 		   it, so lose it.  */
+ 		rel->r_offset = rel->r_info = rel->r_addend = 0;
+ 	      }
+ 
+ 	    /* If we didn't borrow our parent's section references, kill
+ 	       this entry.  */
+ 	    if ((h->vtable_parent == (struct elf_link_hash_entry *) -1
+ 		 || (h->vtable_entries_used
+ 		     != h->vtable_parent->vtable_entries_used))
+ 		&& h->vtable_entries_used[entry] != NULL)
+ 	      {
+ 		htab_delete (h->vtable_entries_used[entry]);
+ 		h->vtable_entries_used[entry] = NULL;
+ 	      }
+ 	  }
+       }
+ 
+   if (h->vtable_parent == (struct elf_link_hash_entry *) -1
+       || h->vtable_entries_used != h->vtable_parent->vtable_entries_used)
+     free (h->vtable_entries_used - 1);
+ 
+   return true;
+ }
+ 
  /* Do mark and sweep of unused sections.  */
  
  boolean
*************** elf_gc_sections (abfd, info)
*** 6613,6625 ****
    asection * (*gc_mark_hook)
      PARAMS ((bfd *abfd, struct bfd_link_info *, Elf_Internal_Rela *,
               struct elf_link_hash_entry *h, Elf_Internal_Sym *));
  
    if (!get_elf_backend_data (abfd)->can_gc_sections
        || info->relocateable || info->emitrelocations
        || elf_hash_table (info)->dynamic_sections_created)
      return true;
  
!   /* Apply transitive closure to the vtable entry usage info.  */
    elf_link_hash_traverse (elf_hash_table (info),
  			  elf_gc_propagate_vtable_entries_used,
  			  (PTR) &ok);
--- 6866,6879 ----
    asection * (*gc_mark_hook)
      PARAMS ((bfd *abfd, struct bfd_link_info *, Elf_Internal_Rela *,
               struct elf_link_hash_entry *h, Elf_Internal_Sym *));
+   struct elf_gc_info gc_info;
  
    if (!get_elf_backend_data (abfd)->can_gc_sections
        || info->relocateable || info->emitrelocations
        || elf_hash_table (info)->dynamic_sections_created)
      return true;
  
!   /* Propagate vtable entry usage info.  */
    elf_link_hash_traverse (elf_hash_table (info),
  			  elf_gc_propagate_vtable_entries_used,
  			  (PTR) &ok);
*************** elf_gc_sections (abfd, info)
*** 6632,6640 ****
  			  (PTR) &ok);
    if (!ok)
      return false;
  
!   /* Grovel through relocs to find out who stays ...  */
  
    gc_mark_hook = get_elf_backend_data (abfd)->gc_mark_hook;
    for (sub = info->input_bfds; sub != NULL; sub = sub->link_next)
      {
--- 6886,6908 ----
  			  (PTR) &ok);
    if (!ok)
      return false;
+ 
+   /* Prepare to collect the sections to be kept, for vtable reloc referencing.  */
+   gc_info.abfd = abfd;
+   gc_info.ok = true;
+   gc_info.info = info;
+   gc_info.checking.sections
+     = bfd_zmalloc (bfd_get_section_id_bound (abfd) * sizeof (struct sec *));
+   gc_info.marking.section_count = 0;
+   gc_info.marking.sections
+     = bfd_zmalloc (bfd_get_section_id_bound (abfd) * sizeof (struct sec *));
  
!   if (gc_info.checking.sections == NULL || gc_info.marking.sections == NULL)
!     return false;
  
+   /* Grovel through relocs to find out what sections stay, not counting
+      vtable relocs.  */
+ 
    gc_mark_hook = get_elf_backend_data (abfd)->gc_mark_hook;
    for (sub = info->input_bfds; sub != NULL; sub = sub->link_next)
      {
*************** elf_gc_sections (abfd, info)
*** 6645,6657 ****
  
        for (o = sub->sections; o != NULL; o = o->next)
  	{
! 	  if (o->flags & SEC_KEEP)
!   	    if (!elf_gc_mark (info, o, gc_mark_hook))
! 	      return false;
  	}
      }
  
!   /* ... and mark SEC_EXCLUDE for those that go.  */
    if (!elf_gc_sweep(info, get_elf_backend_data (abfd)->gc_sweep_hook))
      return false;
  
--- 6913,6958 ----
  
        for (o = sub->sections; o != NULL; o = o->next)
  	{
! 	  if ((o->flags & SEC_KEEP) && ! o->gc_mark)
! 	    {
! 	      gc_info.marking.sections[gc_info.marking.section_count++] = o;
! 
! 	      if (!elf_gc_mark (info, o, gc_mark_hook, &gc_info.marking))
! 		return false;
! 	    }
  	}
      }
+ 
+   /* For the newly-marked-kept sections, check if some vtable relocs are
+      referenced by any of them.  If so, the section for those vtable
+      relocs are gc_marked.  Rinse and repeat until no new sections are
+      marked kept.  */
+ 
+   while (gc_info.marking.section_count != 0)
+     {
+       /* Copy over the newly-marked-kept sections, and reset
+ 	 new-found-kept sections.  */
+       struct sec **sections0 = gc_info.checking.sections;
+       gc_info.checking.sections = gc_info.marking.sections;
+       gc_info.marking.sections = sections0;
+       gc_info.checking.section_count = gc_info.marking.section_count;
+       gc_info.marking.section_count = 0;
+ 
+       elf_link_hash_traverse (elf_hash_table (info),
+ 			      elf_gc_mark_vtentry_relocs,
+ 			      (PTR) &gc_info);
+       if (gc_info.ok == false)
+ 	return false;
+     }
+ 
+   /* Sweep vtentries still unreferenced from kept sections.  */
+   elf_link_hash_traverse (elf_hash_table (info),
+ 			  elf_gc_sweep_vtentry_relocs,
+ 			  (PTR) &ok);
+   if (!ok)
+     return false;
  
!   /* Finally, mark SEC_EXCLUDE for those sections that go.  */
    if (!elf_gc_sweep(info, get_elf_backend_data (abfd)->gc_sweep_hook))
      return false;
  
*************** elf_gc_record_vtentry (abfd, sec, h, add
*** 6724,6733 ****
       struct elf_link_hash_entry *h;
       bfd_vma addend;
  {
    if (addend >= h->vtable_entries_size)
      {
        size_t size, bytes;
!       boolean *ptr = h->vtable_entries_used;
  
        /* While the symbol is undefined, we have to be prepared to handle
  	 a zero size.  */
--- 7025,7037 ----
       struct elf_link_hash_entry *h;
       bfd_vma addend;
  {
+   htab_t referencing_sections;
+   struct sec **hashslot;
+ 
    if (addend >= h->vtable_entries_size)
      {
        size_t size, bytes;
!       htab_t *ptr = h->vtable_entries_used;
  
        /* While the symbol is undefined, we have to be prepared to handle
  	 a zero size.  */
*************** elf_gc_record_vtentry (abfd, sec, h, add
*** 6746,6752 ****
  
        /* Allocate one extra entry for use as a "done" flag for the
  	 consolidation pass.  */
!       bytes = (size / FILE_ALIGN + 1) * sizeof (boolean);
  
        if (ptr)
  	{
--- 7050,7056 ----
  
        /* Allocate one extra entry for use as a "done" flag for the
  	 consolidation pass.  */
!       bytes = (size / FILE_ALIGN + 1) * sizeof (htab_t);
  
        if (ptr)
  	{
*************** elf_gc_record_vtentry (abfd, sec, h, add
*** 6755,6762 ****
  	  if (ptr != NULL)
  	    {
  	      size_t oldbytes;
  
- 	      oldbytes = (h->vtable_entries_size/FILE_ALIGN + 1) * sizeof (boolean);
  	      memset (((char *)ptr) + oldbytes, 0, bytes - oldbytes);
  	    }
  	}
--- 7059,7068 ----
  	  if (ptr != NULL)
  	    {
  	      size_t oldbytes;
+ 
+ 	      oldbytes
+ 		= (h->vtable_entries_size/FILE_ALIGN + 1) * sizeof (htab_t);
  
  	      memset (((char *)ptr) + oldbytes, 0, bytes - oldbytes);
  	    }
  	}
*************** elf_gc_record_vtentry (abfd, sec, h, add
*** 6770,6778 ****
        h->vtable_entries_used = ptr + 1;
        h->vtable_entries_size = size;
      }
-   
-   h->vtable_entries_used[addend / FILE_ALIGN] = true;
  
    return true;
  }
  
--- 7076,7102 ----
        h->vtable_entries_used = ptr + 1;
        h->vtable_entries_size = size;
      }
  
+   referencing_sections = h->vtable_entries_used[addend / FILE_ALIGN];
+   if (referencing_sections == NULL)
+     {
+       /* FIXME: The number 6 is chosen arbitrarily.  Should be chosen from
+ 	 measurements on real programs.  */
+       referencing_sections
+ 	= htab_create (6, htab_hash_pointer, htab_eq_pointer, NULL);
+ 
+       if (referencing_sections == NULL)
+ 	return false;
+ 
+       h->vtable_entries_used[addend / FILE_ALIGN] = referencing_sections;
+     }
+ 
+   hashslot
+     = (struct sec **) htab_find_slot (referencing_sections, (PTR) sec, INSERT);
+   if (hashslot == NULL)
+     return false;
+ 
+   *hashslot = sec;
    return true;
  }
  
brgds, H-P

