This is the mail archive of the
libc-alpha@sources.redhat.com
mailing list for the glibc project.
Re: rwlocks writer preference patch
- To: Momchil Velikov <velco at fadata dot bg>
- Subject: Re: rwlocks writer preference patch
- From: Kaz Kylheku <kaz at ashi dot footprints dot net>
- Date: Thu, 9 Nov 2000 12:11:58 -0800 (PST)
- cc: GLIBC List <libc-alpha at sources dot redhat dot com>, Joel Klecker <debian-glibc at lists dot debian dot org>
On Thu, 9 Nov 2000, Momchil Velikov wrote:
> Date: Thu, 09 Nov 2000 12:52:44 +0000
> From: Momchil Velikov <velco@fadata.bg>
> To: GLIBC List <libc-alpha@sources.redhat.com>
> Cc: Joel Klecker <debian-glibc@lists.debian.org>
> Subject: rwlocks writer preference patch
>
> Hi,
>
> The 2.1.3 implementation of rwlocks (and the current CVS, FWIW)
> still allows for excessive starvation for writers, no matter
> if writer preference is set. The attached patch makes a waiting
> writer the owner of the lock upon wakeup.
>
> Regards,
> -velco
>
> PS. Joel, note this hunk from the patch, this bug is already fixed in
> the CVS, but you might still want to add it to the Debian patches
> (along with the rest).
>
> @@ -362,7 +364,8 @@
> }
> rwlock->__rw_writer = NULL;
>
> - if (rwlock->__rw_kind == PTHREAD_RWLOCK_PREFER_READER_NP
> + if ((rwlock->__rw_kind == PTHREAD_RWLOCK_PREFER_READER_NP
> + && !queue_is_empty(&rwlock->__rw_read_waiting))
> || (th = dequeue (&rwlock->__rw_write_waiting)) == NULL)
> {
> /* Restart all waiting readers. */
Indeed, this one was fixed long ago. In fact, it was fixed on January 17, 2000
in the main trunk. For some reason this did not go into the 2.1 branch.
>@@ -313,7 +313,9 @@
> while(1)
> {
> __pthread_lock (&rwlock->__rw_lock, self);
>- if (rwlock->__rw_readers == 0 && rwlock->__rw_writer == NULL)
>+ if (rwlock->__rw_readers == 0
>+ && (rwlock->__rw_writer == NULL
>+ || rwlock->__rw_writer == self))
> {
> rwlock->__rw_writer = self;
> __pthread_unlock (&rwlock->__rw_lock);
I understand what this is trying to do: namely, to ensure atomic handoff of the
write lock from one writer to another.
However, this fix is dangerous because it changes the behavior of the write
lock operation in the face of the programming error of taking the lock
recursively. If a thread write locks the same lock twice, this function will
now happily return 0 instead of deadlocking the thread.
Although any response to undefined behavior is permissible, in my experience
debugging these kinds of problems, they are easier to find if the thread is
simply allowed to deadlock.
The next best thing is to return EDEADLK, as the spec suggests. So this
patch would be okay if we added an initial test for rwlock->__rw_writer == self
that returns EDEADLK.
Alternatively, the test for a successful handoff can be done after the suspend().
In other words, the loop is restructured like this:
  __pthread_lock (&rwlock->__rw_lock, self);
  while (1)
    {
      if (rwlock->__rw_readers == 0 && rwlock->__rw_writer == NULL)
        {
          rwlock->__rw_writer = self;
          __pthread_unlock (&rwlock->__rw_lock);
          return 0;
        }

      /* Suspend ourselves, then try again */
      enqueue (&rwlock->__rw_write_waiting, self);
      __pthread_unlock (&rwlock->__rw_lock);
      suspend (self); /* This is not a cancellation point */
      __pthread_lock (&rwlock->__rw_lock, self);

      /* Check if someone handed us ownership of the lock */
      if (rwlock->__rw_writer == self)
        {
          __pthread_unlock (&rwlock->__rw_lock);
          return 0;
        }
    }
This way, there is no behavior change on recursive deadlock.