This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.



RE: Patch to mutex.c


On Wed, 20 Sep 2000 Neale.Ferguson@SoftwareAG-USA.com wrote:

> Date: Wed, 20 Sep 2000 13:14:09 -0400
> From: Neale.Ferguson@SoftwareAG-USA.com
> To: kaz@ashi.footprints.net
> Cc: libc-alpha@sources.redhat.com
> Subject: RE: Patch to mutex.c
> 
> Thanks for the quick response.
> 
> re: alt_unlock()
> ----------------
> 
> for PTHREAD_MUTEX_ERRORCHECK_NP types, pthread_mutex_lock will 
>  - call __pthread_alt_lock() which will:
>  - check that lock->__status == 0, if so it will set newstatus to 1
>                                    else set newstatus to &wait_node 
>  - compare_and_swap making sure __status still = oldstatus and if so swap in
> newstatus to __status (i.e. 1 or &wait_node)

No, to grab the lock, you must swap in (oldstatus | 1), on the proviso that
the status location is still equal to oldstatus. This can be attempted only
if the LSB is clear ((oldstatus & 1) == 0).

The essential idea is to atomically flip the LSB while preserving the value of
all the other bits.
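
Roughly, using the GCC __sync builtin in place of linuxthreads' own
compare_and_swap macro (and a made-up function name, so this is only a
sketch of the idea, not the code in mutex.c):

/* Illustrative only: grab the lock by atomically setting the LSB of the
   status word while preserving whatever the other bits hold (e.g. the
   head of the wait queue). */
static int try_grab_lock(volatile long *status)
{
    long oldstatus = *status;

    if (oldstatus & 1)          /* LSB already set: lock is held */
        return 0;

    /* Swap in (oldstatus | 1) only if *status is still oldstatus. */
    return __sync_bool_compare_and_swap(status, oldstatus, oldstatus | 1);
}

A caller that needs the lock would retry, or fall through to enqueuing a
wait node, whenever this returns 0.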

> 
> for PTHREAD_MUTEX_ERRORCHECK_NP types, __pthread_mutex_unlock will:
>  - check that the lsb of __status is on (and owner is the current thread)
>  - if so, it will call alt_unlock() 
>  - However, if __status actually == &wait_node then it will not call
> alt_unlock() and just return EPERM (unless the address of wait_node is on
> an odd-byte boundary)

The assumption is that the address of a wait_node type *cannot* be on an odd
byte boundary. It's a struct that has a minimum alignment of 4; whenever such a
struct is declared in automatic storage, the compiler ensures that the address
meets the alignment requirements of the type. Moreover, dynamically allocated
pointers from malloc() are suitably aligned for any type. So in either case, we
can use that bit of the pointer to store extra information, provided that we
clear that bit whenever we intend to use that value as a pointer.
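
To illustrate the tagging trick in isolation (the struct below is made up
for the example, not the real wait_node from mutex.c):

#include <assert.h>
#include <stdint.h>

struct wait_node {
    struct wait_node *next;   /* pointer member forces >= 4-byte alignment */
    int abandoned;
};

int main(void)
{
    struct wait_node node;

    /* Alignment guarantees bit 0 of the address is clear... */
    assert(((uintptr_t) &node & 1) == 0);

    /* ...so it can carry a flag alongside the pointer. */
    uintptr_t tagged = (uintptr_t) &node | 1;

    /* Mask the bit off before using the value as a pointer again. */
    struct wait_node *p = (struct wait_node *) (tagged & ~(uintptr_t) 1);
    assert(p == &node);
    return 0;
}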

> so: should alt_lock() set newstatus = &wait_node | 1L

No, to (oldstatus | 1).

> re: recursive_np
> ----------------
> My concern here is that if a program does call pthread_mutex_unlock() "too
> many times" then _pthread_unlock will:
> 
> - fail the while((oldstatus  = lock->__status) == 1) test, with oldstatus =
> 0x00. 
> - ptr will be set to &lock->__status (ok) and thr = oldstatus & ~1L (in this
> instance 0x00). 
> - The while(thr != 0) will fail and the next instruction executed will be 
> - thr->p_nextlock = NULL which means writing into location 0x00. On our
> system that means crash!  

The program has a bug: it is trying to unlock a mutex that is not locked.
The behavior is undefined, and a crash is a possible, and often likely,
consequence of undefined behavior.
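
If you want the failure reported instead, that is what the error-checking
type is for; with PTHREAD_MUTEX_ERRORCHECK_NP the extra unlock comes back
as EPERM rather than scribbling on a null pointer. A small standalone
demonstration (compile with -pthread on a glibc system):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t mutex;
    int rc;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK_NP);
    pthread_mutex_init(&mutex, &attr);

    pthread_mutex_lock(&mutex);
    pthread_mutex_unlock(&mutex);

    rc = pthread_mutex_unlock(&mutex);   /* one unlock too many */
    printf("second unlock: %d (%s)\n", rc, rc == EPERM ? "EPERM" : "?");

    pthread_mutex_destroy(&mutex);
    pthread_mutexattr_destroy(&attr);
    return 0;
}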

