This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: why does rwlock prefer readers by default?


On Sat, May 24, 2014 at 12:55:45PM -0400, Rich Felker wrote:
> On Sat, May 24, 2014 at 06:33:06PM +0200, Ondřej Bílka wrote:
> > > It also means you need to handle the case where it fails.
> > > 
> > > > With readers we let kernel handle that.
> > > 
> > > The kernel? There's only one futex object for an individual contended
> > > address, and if it can't be created, futex wait will return an error
> > > and then you just spin.
> > >
> > Do you have documentation? My point was that the futex object needs to
> > remember which threads to wake. Could you cause spinning for multiple
> > threads in the current implementation? See nptl/pthread_rwlock_wrlock.c:75
> 
> A given thread can only be waiting on at most one futex, and even if
> it did require resource allocation to add a waiter, failure would just
> return to userspace without waiting, allowing the caller to retry
> (consuming cpu time but not causing deadlock except possibly in
> realtime scheduling cases).
> 
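A minimal sketch of that fallback, assuming the raw Linux futex syscall
(the lock protocol here is illustrative, not nptl's): any futex error
just falls through to the next CAS attempt, so the worst case is
spinning, not deadlock.

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative slow path: loop CAS + futex wait.  Errors from the
   wait (EAGAIN on a lost race, EINTR on a signal, ...) are ignored;
   we simply re-read the lock word and retry.  The matching unlock
   would store 0 and issue FUTEX_WAKE. */
static void lock_slowpath(int *lockword)
{
    while (!__sync_bool_compare_and_swap(lockword, 0, 1))
        syscall(SYS_futex, lockword, FUTEX_WAIT, 1, NULL, NULL, 0);
}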
> > > > An argument that you need unbounded memory usage is misleading, as it is
> > > > bounded by the number of threads, and contention among many readers/writers is a
> > > > bigger issue than allocating a few extra bytes.
> > > 
> > > The number of threads is essentially unbounded (limit is either ~500
> > > million or ~1 billion, I forget which).
> > > 
> > Still, a thread needs its stack, which is at least one page.
> 
> That doesn't impose any limit on 64-bit systems. The limiting factor
> is the range of int, used for tids/futexes, together with the fact
> that the upper few bits have special meaning in futex.
> 
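Concretely, the futex ABI reserves the top two bits of the 32-bit lock
word, leaving 30 bits for a tid; the constants below are the real
definitions from the Linux UAPI header, the program is just a quick
check:

#include <linux/futex.h>
#include <stdio.h>

int main(void)
{
    /* FUTEX_WAITERS (0x80000000) and FUTEX_OWNER_DIED (0x40000000)
       occupy the top two bits, so the owner tid is confined to
       FUTEX_TID_MASK = 0x3fffffff, about 1.07 billion. */
    printf("max tid under the futex ABI: %#x (%u)\n",
           FUTEX_TID_MASK, (unsigned) FUTEX_TID_MASK);
    return 0;
}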
> > > > To detect these you use a Bloom
> > > > filter: split the counter into eight 8-bit counters, randomly generate for each
> > > > rwlock a mask of which counters it uses, and yield if all of those counters are zero.
> > > 
> > > I don't see how this is better than choosing a single-bit mask.
> > > Multiple bits just make it more likely to prefer a reader when you
> > > shouldn't.
> > > 
> > A problem is how you clear the mask when you stop using one lock.
> 
> I don't follow.
> 
So let lock A have mask 1 and lock B mask 2. You do:
lock(A); assert(mask == 1);
lock(B); assert(mask == 3);
unlock(A);
Now how do you set the mask on unlock?
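To make that concrete, here is a rough sketch of the counter scheme as
described above; all names are illustrative. Because each slot is an
8-bit counter rather than a single bit, unlock can simply decrement,
which is exactly what a one-bit-per-slot mask cannot do safely once
held locks share bits:

#include <stdint.h>

/* One byte per Bloom slot, per thread; names illustrative. */
static __thread uint8_t rd_counters[8];

/* rdlock: bump every slot selected by this rwlock's random mask. */
static void note_rdlock(uint8_t mask)
{
    for (int i = 0; i < 8; i++)
        if (mask & (1u << i))
            rd_counters[i]++;
}

/* unlock: decrement the same slots.  With A (mask 1) and B (mask 2)
   both held, unlocking A leaves B's slot at 1, and this stays correct
   even when two locks' masks overlap. */
static void note_unlock(uint8_t mask)
{
    for (int i = 0; i < 8; i++)
        if (mask & (1u << i))
            rd_counters[i]--;
}

/* Yield to writers only when every slot in this lock's mask is zero,
   i.e. the thread provably holds no read lock hashing to those slots. */
static int all_slots_zero(uint8_t mask)
{
    for (int i = 0; i < 8; i++)
        if ((mask & (1u << i)) && rd_counters[i] != 0)
            return 0;
    return 1;
}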

> > > > There is a technical problem: if a counter reaches 255 we cannot
> > > > increase or decrease it, but that should not happen in practice.
> > > 
> > > What do you do when it does?
> > > 
> > Keep it at 255; it only causes that thread to prefer readers.
> 
> In that case what happens when you decrement? You can reach 0 when you
> still hold a read lock, and then calling rdlock again will deadlock
> (attempting to prefer a writer) rather than allowing you to obtain the
> lock recursively.
> 
No, as I wrote, you keep it at 255 on decrement as well; the counter saturates.
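That is, the slot saturates in both directions: once it hits 255,
neither increment nor decrement touches it again, so it can never fall
back to 0 while a read lock is still held. A two-line sketch of that
rule (again illustrative):

#include <stdint.h>

/* Saturating slot update: 255 is sticky.  A thread that ever
   overflows a slot merely keeps preferring readers for locks that
   hash there; it cannot underflow into the deadlock described above. */
static void slot_inc(uint8_t *c) { if (*c != 255) (*c)++; }
static void slot_dec(uint8_t *c) { if (*c != 255) (*c)--; }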

