


Re: [PATCH] Optimize libc_lock_lock for MIPS XLP.


On 15/06/2012, at 2:49 PM, Maxim Kuvyrkov wrote:

> On 15/06/2012, at 2:44 PM, Chris Metcalf wrote:
> 
>> On 6/14/2012 9:20 PM, Maxim Kuvyrkov wrote:
> ...
>>> As I read it, in case of a contended lock __lll_lock_wait will reset the value of the lock to "2" before calling lll_futex_wait().  I agree that there is a timing window in which the other threads will see a value of the lock greater than "2", but the value will not climb into the hundreds or overflow, as it is constantly reset to "2" by the atomic_exchange in __lll_lock_wait().
>>> 
>>> I do not see how threads will get into a busywait state, though.  Would you please elaborate on that?
>> 
>> You are correct.  I was thinking that the while loop had a cmpxchg, but
>> since it's just a straight-up exchange, the flow will be something like:
>> 
>> - Skip the initial lll_futex_wait() call if the lock value is already above 2
>> - Fall through to while loop
>> - Spin as long as the lock is contended enough that *futex > 2
>> - Enter futex_wait
>> 
>> So a little busy-waiting under high contention, but it probably settles
>> out reasonably well.
> 
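
For the record, the wait path being discussed looks roughly like this (a
sketch of __lll_lock_wait_private from nptl/lowlevellock.c, slightly
simplified; exact details vary by port):

void
__lll_lock_wait_private (int *futex)
{
  if (*futex == 2)
    lll_futex_wait (futex, 2, LLL_PRIVATE);

  /* A plain exchange, not a cmpxchg: reset the value to 2 and sleep until
     it changes.  Under heavy contention the value may bounce above 2
     between the exchange and the wait, causing the futex_wait to return
     immediately and producing the brief busy-wait described above.  */
  while (atomic_exchange_acq (futex, 2) != 0)
    lll_futex_wait (futex, 2, LLL_PRIVATE);
}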

Attached is an improved patch that also optimizes __libc_lock_trylock using XLP's atomic instructions.

The patch also removes an unnecessary indirection step: rather than introducing a new macro lll_add_lock and using it to define __libc_lock_lock, it defines __libc_lock_lock and __libc_lock_trylock directly in lowlevellock.h.  This makes the changes outside of ports/ trivial.

Tested on MIPS XLP with no regressions.  OK to apply for 2.17?

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics


Allow overrides of __libc_lock_lock and __libc_lock_trylock.

	* nptl/sysdeps/pthread/bits/libc-lockP.h (__libc_lock_lock)
	(__libc_lock_trylock): Allow pre-existing definitions.
---
 nptl/sysdeps/pthread/bits/libc-lockP.h |    8 ++++++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/nptl/sysdeps/pthread/bits/libc-lockP.h b/nptl/sysdeps/pthread/bits/libc-lockP.h
index 0ebac91..9c61662 100644
--- a/nptl/sysdeps/pthread/bits/libc-lockP.h
+++ b/nptl/sysdeps/pthread/bits/libc-lockP.h
@@ -176,8 +176,10 @@ typedef pthread_key_t __libc_key_t;
 
 /* Lock the named lock variable.  */
 #if !defined NOT_IN_libc || defined IS_IN_libpthread
-# define __libc_lock_lock(NAME) \
+# ifndef __libc_lock_lock
+#  define __libc_lock_lock(NAME) \
   ({ lll_lock (NAME, LLL_PRIVATE); 0; })
+# endif
 #else
 # define __libc_lock_lock(NAME) \
   __libc_maybe_call (__pthread_mutex_lock, (&(NAME)), 0)
@@ -189,8 +191,10 @@ typedef pthread_key_t __libc_key_t;
 
 /* Try to lock the named lock variable.  */
 #if !defined NOT_IN_libc || defined IS_IN_libpthread
-# define __libc_lock_trylock(NAME) \
+# ifndef __libc_lock_trylock
+#  define __libc_lock_trylock(NAME) \
   lll_trylock (NAME)
+# endif
 #else
 # define __libc_lock_trylock(NAME) \
   __libc_maybe_call (__pthread_mutex_trylock, (&(NAME)), 0)
-- 
1.7.4.1
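
With the #ifndef guards in place, a port can override these macros simply by
defining them before bits/libc-lockP.h is included, as the XLP patch below
does.  A minimal sketch of the idea (my_port_lll_lock and my_port_lll_trylock
are hypothetical names, for illustration only):

/* In a port's lowlevellock.h, which is seen before bits/libc-lockP.h;
   the guarded generic definitions then stay out of the way.  */
#define __libc_lock_lock(NAME) \
  ({ my_port_lll_lock (&(NAME)); 0; })
#define __libc_lock_trylock(NAME) \
  my_port_lll_trylock (&(NAME))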

Optimize libc_lock_lock for XLP.

2012-06-28  Tom de Vries  <vries@codesourcery.com>
	    Maxim Kuvyrkov  <maxim@codesourcery.com>

	* sysdeps/unix/sysv/linux/mips/nptl/lowlevellock.h (__libc_lock_lock)
	(__libc_lock_trylock): Define for XLP.
---
 sysdeps/unix/sysv/linux/mips/nptl/lowlevellock.h |   39 ++++++++++++++++++++-
 1 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/sysdeps/unix/sysv/linux/mips/nptl/lowlevellock.h b/sysdeps/unix/sysv/linux/mips/nptl/lowlevellock.h
index 88b601e..a441e6b 100644
--- a/sysdeps/unix/sysv/linux/mips/nptl/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/mips/nptl/lowlevellock.h
@@ -1,5 +1,4 @@
-/* Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008,
-   2009 Free Software Foundation, Inc.
+/* Copyright (C) 2003-2012 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
    The GNU C Library is free software; you can redistribute it and/or
@@ -291,4 +290,40 @@ extern int __lll_timedwait_tid (int *, const struct timespec *)
     __res;						\
   })
 
+#ifdef _MIPS_ARCH_XLP
+/* Implement __libc_lock_lock using exchange_and_add, which expands into
+   a single LDADD instruction on XLP.  This is a simplified expansion of
+   ({ lll_lock (NAME, LLL_PRIVATE); 0; }).
+
+   __lll_lock_wait_private() resets the lock value to '2', which prevents
+   unbounded growth of the value and [with billions of threads] overflow.
+
+   As atomic.h currently supports only a full-barrier atomic_exchange_and_add,
+   this expansion uses a full barrier where an acquire barrier would suffice,
+   which is not beneficial for MIPS in general.  Limit this optimization to
+   XLP for now.  */
+# define __libc_lock_lock(NAME)						\
+  ({									\
+    int *__futex = &(NAME);						\
+    if (__builtin_expect (atomic_exchange_and_add (__futex, 1), 0))	\
+      __lll_lock_wait_private (__futex);				\
+    0;									\
+  })
+
+# define __libc_lock_trylock(NAME)					\
+  ({									\
+  int *__futex = &(NAME);						\
+  int __result;								\
+  if (atomic_exchange_and_add (__futex, 1) == 0)			\
+    __result = 0;							\
+  else									\
+    /* The lock is already locked.  Set it to the 'contended' state to avoid \
+       unbounded growth from subsequent trylocks.  This slightly degrades \
+       performance of the locked-but-uncontended case, as lll_futex_wake() \
+       will then be called unnecessarily on unlock.  */		\
+    __result = (atomic_exchange_acq (__futex, 2) != 0);			\
+  __result;								\
+  })
+#endif
+
 #endif	/* lowlevellock.h */
-- 
1.7.4.1
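
To make the patch easier to review, here is a minimal standalone sketch of
the lock-value protocol the macros rely on (0 = unlocked, 1 = locked and
uncontended, >= 2 = locked and contended).  The stand-in atomics below use
GCC __atomic builtins in place of glibc's atomic.h, purely for illustration:

#include <stdio.h>

/* Illustrative stand-ins for glibc's atomic.h primitives.  */
static int
atomic_exchange_and_add (int *mem, int val)
{
  return __atomic_fetch_add (mem, val, __ATOMIC_SEQ_CST);
}

static int
atomic_exchange_acq (int *mem, int val)
{
  return __atomic_exchange_n (mem, val, __ATOMIC_ACQUIRE);
}

int
main (void)
{
  int futex = 0;

  /* Fast path: fetch-and-add sees 0, so the lock is acquired and the
     value becomes 1 (locked, uncontended).  On XLP this is one LDADD.  */
  if (atomic_exchange_and_add (&futex, 1) == 0)
    printf ("acquired: futex = %d\n", futex);		/* prints 1 */

  /* Failed trylock: fetch-and-add sees a non-zero value, so the lock is
     held by someone else.  Reset the value to 2 (contended) so that
     repeated trylocks cannot grow it without bound.  */
  if (atomic_exchange_and_add (&futex, 1) != 0)
    {
      atomic_exchange_acq (&futex, 2);
      printf ("trylock failed: futex = %d\n", futex);	/* prints 2 */
    }

  return 0;
}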

