This is the mail archive of the libc-ports@sources.redhat.com mailing list for the libc-ports project.



[PING][RFC][BZ #4578] Fix assertion triggered by thread/fork interaction


ping

On Wed, Oct 09, 2013 at 10:05:34PM +0200, Ondřej Bílka wrote:
> Hi,
> 
> This bug has had a simple patch available for five years without a reply:
> https://sourceware.org/bugzilla/show_bug.cgi?id=4578
> Could someone comment on it?
> 
> It was detected on a custom chip; could it be replicated on other
> architectures?
> 
> The analysis from bugzilla and the patch are below.
> 
> "
> Details:
> 
> If a thread happens to hold dl_load_lock and have r_state set to RT_ADD
> or RT_DELETE at the time another thread calls fork(), then the child path
> of fork (in nptl/sysdeps/unix/sysv/linux/fork.c in our case)
> re-initializes dl_load_lock but does not restore r_state to RT_CONSISTENT.
> If the child subsequently requires ld.so functionality before calling exec(),
> then ld.so's assertion that r_state == RT_CONSISTENT will fire.
> 
> The patch acquires dl_load_lock on entry to fork() and releases it on exit
> from the parent path.  The child path re-initializes the lock as it does
> today.  This is essentially pthread_atfork, but forced to run first because
> the acquisition of dl_load_lock must happen before malloc_atfork is active
> in order to avoid a deadlock.
> "
> 
> --- glibc-2.5-sources/nptl/sysdeps/unix/sysv/linux/fork.c	2007-05-29 23:44:33.000000000 -0400
> +++ glibc-2.5-modified/nptl/sysdeps/unix/sysv/linux/fork.c	2007-05-31 15:07:18.712221827 -0400
> @@ -27,6 +27,7 @@
>  #include "fork.h"
>  #include <hp-timing.h>
>  #include <ldsodefs.h>
> +#include <bits/libc-lock.h>
>  #include <bits/stdio-lock.h>
>  #include <atomic.h>
>  
> @@ -59,6 +60,8 @@
>      struct used_handler *next;
>    } *allp = NULL;
>  
> +  /* Grab the ld.so lock BEFORE switching to malloc_atfork.  */
> +  __rtld_lock_lock_recursive (GL(dl_load_lock));
>    /* Run all the registered preparation handlers.  In reverse order.
>       While doing this we build up a list of all the entries.  */
>    struct fork_handler *runp;
> @@ -208,6 +211,8 @@
>  
>  	  allp = allp->next;
>  	}
> +      /* Unlock ld.so last, because we locked it first.  */
> +      __rtld_lock_unlock_recursive (GL(dl_load_lock));
>      }
>  
>    return pid;
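
The analysis above calls this "essentially pthread_atfork, but forced to
be first".  For reference, here is a stand-alone sketch of that
prepare/parent/child pattern with an ordinary mutex standing in for
dl_load_lock (user code cannot reach the glibc-internal lock, so the
names below are illustrative only):

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static pthread_mutex_t stand_in_lock = PTHREAD_MUTEX_INITIALIZER;

/* prepare: runs in the forking thread before the fork, so no other
   thread can hold the lock across the fork.  */
static void
prepare (void)
{
  pthread_mutex_lock (&stand_in_lock);
}

/* parent: simply drop the lock again after the fork.  */
static void
parent (void)
{
  pthread_mutex_unlock (&stand_in_lock);
}

/* child: the forking thread took the lock in prepare, so it may release
   it here.  fork.c instead re-initializes dl_load_lock in the child,
   because without the prepare step the lock could have been held by a
   thread that no longer exists there, which is exactly the situation
   the bug is about.  */
static void
child (void)
{
  pthread_mutex_unlock (&stand_in_lock);
}

int
main (void)
{
  pthread_atfork (prepare, parent, child);

  pid_t pid = fork ();
  if (pid == 0)
    _exit (0);
  waitpid (pid, NULL, 0);
  puts ("prepare/parent/child handlers ran around fork");
  return 0;
}

The patch does not register these through pthread_atfork; it takes
dl_load_lock directly at the top of fork() so that the acquisition is
guaranteed to happen before the malloc handlers take their locks, and
the release happens only after the parent-side handlers have run, which
is the ordering the analysis says is needed to avoid a deadlock.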

