This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.



RE: Patch to mutex.c



Does that mean my fix is not usable and my analysis is flawed (not atypical, I'm afraid ;-))? If so, where did I go wrong?

I've also noticed in pthread_mutex_unlock() that, for mutexes of kind PTHREAD_MUTEX_RECURSIVE_NP, once the mutex has been unlocked enough times that the count reaches 0, __pthread_unlock() is called. If a routine then calls pthread_mutex_unlock() again on the same mutex, __pthread_unlock() is called a second time, which causes grief. Shouldn't there be a test like...

if (mutex->__m_owner != NULL) {
  /* Mutex is still held: clear the owner before releasing the lock.  */
  mutex->__m_owner = NULL;
  __pthread_unlock(&mutex->__m_lock);
  return 0;
}
else
  /* Mutex is not held: refuse the extra unlock rather than
     calling __pthread_unlock() a second time.  */
  return EPERM;
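
For illustration, a minimal sketch of the double-unlock scenario I mean; it assumes the owner check above is in place, so the second unlock is expected to return EPERM rather than calling __pthread_unlock() again (compile with -lpthread):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
  pthread_mutex_t m;
  pthread_mutexattr_t attr;
  int rc;

  pthread_mutexattr_init(&attr);
  pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE_NP);
  pthread_mutex_init(&m, &attr);

  pthread_mutex_lock(&m);

  rc = pthread_mutex_unlock(&m);    /* count reaches 0, lock is released */
  printf("first unlock:  %d\n", rc);

  rc = pthread_mutex_unlock(&m);    /* mutex is no longer held; with the
                                       owner check this should be EPERM */
  printf("second unlock: %d (EPERM = %d)\n", rc, EPERM);

  pthread_mutex_destroy(&m);
  pthread_mutexattr_destroy(&attr);
  return 0;
}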

-----Original Message-----
On Wed, 20 Sep 2000 Neale.Ferguson@SoftwareAG-USA.com wrote:

> I have been encountering problems with pthread_mutex_unlock(). For
> cases where the mutex kind is PTHREAD_MUTEX_ERRORCHECK_NP, when a lock
> is taken it uses __pthread_alt_lock(), which results in either a 1 or a
> non-zero address being placed in the mutex status field, depending, I
> assume, on whether there is more than one waiter.

What you are describing is the ``original fastlock'', which continues to
be used for the default mutex attribute.

In the ``alternate fastlock'', the status field is actually a bitfield.
Only the least significant bit indicates whether or not the lock is
currently owned.  The other bits constitute a pointer to the head of the
list of waiting threads, which could be null.  (The correct pointer is
formed by masking away the locked bit.)

These two fields are independent.  Any combination of states is
possible, in particular the case where threads are queued but the lock
is not locked.

