This is the mail archive of the libc-help@sourceware.org mailing list for the glibc project.
Re: Race unlocking-locking mutex
- From: David Ahern <dsahern at gmail dot com>
- To: Ángel González <keisial at gmail dot com>
- Cc: Carlos O'Donell <carlos at systemhalted dot org>, Godmar Back <godmar at gmail dot com>, "libc-help at sourceware dot org" <libc-help at sourceware dot org>
- Date: Sat, 14 Sep 2013 14:47:08 -0600
- Subject: Re: Race unlocking-locking mutex
- References: <5231ECBE dot 4030006 at gmail dot com> <CAB4+JY+dBDVhk9UJWXRXrxF5BhoTFXv==_v+ibdEBU5Noj4aEw at mail dot gmail dot com> <5231F2C3 dot 6080305 at gmail dot com> <CAE2sS1gp9Z5b5qoSqgBXbbL-S+_fHaL1PN7z=CfMsqSTA=7LFg at mail dot gmail dot com> <52336631 dot 5030402 at gmail dot com>
On 9/13/13 12:23 PM, Ángel González wrote:
> On 13/09/13 10:53, Carlos O'Donell wrote:
>> I expect that glibc will only be interested in an implementation if
>> you can show a considerable performance boost for having the library
>> implement the ticket lock.
> Even though "the scheduling policy shall determine which thread shall
> acquire the mutex", it seems weird that the thread which ends up
> acquiring the mutex is not one of the threads which were blocked
> waiting.
> David's problem would be solved if the unlock atomically assigned the
> mutex to the woken thread (instead of waiting for it to reacquire the
> mutex). However, as it is a kernel decision, I think it would require
> a new futex operation, which stored the woken TIDs in uaddr2. Then
> mutex->__data.__owner could be passed as uaddr2 and the mutex also
> considered locked if owner != 0.
Not necessarily proposing this for libc, but the product I work on needs
a solution to this problem that enforces some kind of fairness -- and
without adding too much complexity.
One option that comes to mind is adding a new element to the mutex -- a
__prev_owner field. On the unlock path, set __prev_owner to the thread
id, release the lock, and wake a waiter. If the lock is uncontended (no
tasks woken or waiting), set __prev_owner to 0.
On the lock path, check whether __prev_owner equals the thread id, and
if so take the slow path into the kernel.
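To make that concrete, here is a rough sketch of the scheme on a
simplified futex-based mutex. This is not the real glibc pthread_mutex_t
layout; the struct, the 0/1/2 lock states, and all the names are mine,
purely for illustration:

#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

struct fair_mutex {
    atomic_int lock;        /* 0 = free, 1 = locked, 2 = locked w/ waiters */
    atomic_int prev_owner;  /* TID of the last unlocker, 0 if none */
};

static long futex(atomic_int *uaddr, int op, int val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void fair_mutex_lock(struct fair_mutex *m)
{
    pid_t tid = syscall(SYS_gettid);
    int expected = 0;

    /* Lock path: the previous owner skips the fast path so a woken
       waiter has a chance to win the lock first. */
    if (atomic_load(&m->prev_owner) != tid &&
        atomic_compare_exchange_strong(&m->lock, &expected, 1))
        return;                           /* fast path: lock was free */

    /* Slow path: mark the lock contended and sleep in the kernel.
       (Any slow-path acquisition conservatively leaves the lock in
       state 2, so the next unlock always issues a wake.) */
    while (atomic_exchange(&m->lock, 2) != 0)
        futex(&m->lock, FUTEX_WAIT, 2);
}

static void fair_mutex_unlock(struct fair_mutex *m)
{
    pid_t tid = syscall(SYS_gettid);

    /* Unlock path: set __prev_owner first, then release the lock. */
    atomic_store(&m->prev_owner, tid);
    if (atomic_exchange(&m->lock, 0) == 2)
        futex(&m->lock, FUTEX_WAKE, 1);   /* contended: wake one waiter */
    else
        atomic_store(&m->prev_owner, 0);  /* uncontended: clear it */
}

Note that as sketched, the previous owner can still win the exchange in
the slow path if the woken waiter hasn't run yet, so this narrows the
unfairness window rather than closing it.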
Any particular worrisome race conditions to be wary of?
Carlos: I was careful not to use the word 'bug' in my responses. ;-)
There's an interesting read at https://lwn.net/Articles/267968/
(Ticket spinlocks), including some impressive numbers. The kernel
spinlocks had pretty much the same problem: the processor that released
the spinlock would reacquire it before the waiters got a chance
(probably because it still owned the cache line).
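For reference, the core of the ticket idea is tiny. A minimal userspace
sketch (the struct and function names are mine, not the kernel's actual
arch spinlock code):

#include <stdatomic.h>

struct ticket_lock {
    atomic_uint next;     /* next ticket number to hand out */
    atomic_uint serving;  /* ticket number currently being served */
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
    /* Take a ticket, then spin until it is called: lockers are served
       strictly in arrival order. */
    unsigned int me = atomic_fetch_add(&l->next, 1);
    while (atomic_load(&l->serving) != me)
        ;  /* spin; a real lock would pause/relax the CPU here */
}

static void ticket_lock_release(struct ticket_lock *l)
{
    /* Hand the lock to whoever holds the next ticket. */
    atomic_fetch_add(&l->serving, 1);
}

The FIFO ordering is exactly what makes it fair: the CPU that just
released the lock has to take a fresh ticket like everyone else, cache
line ownership notwithstanding.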