[Bug malloc/13939] New: Malloc can deadlock in retry paths


http://sourceware.org/bugzilla/show_bug.cgi?id=13939

             Bug #: 13939
           Summary: Malloc can deadlock in retry paths
           Product: glibc
           Version: 2.15
            Status: NEW
          Severity: normal
          Priority: P2
         Component: malloc
        AssignedTo: unassigned@sourceware.org
        ReportedBy: law@redhat.com
    Classification: Unclassified


Created attachment 6312
  --> http://sourceware.org/bugzilla/attachment.cgi?id=6312
Potential fix

Assume we're in libc_malloc and the call to _int_malloc has failed because
we're unable to sbrk more memory.
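
For context, here is a minimal sketch (my own illustration, not part of the
report) of one way to force a process into the "sbrk has failed" state, by
capping the address space with setrlimit so allocations eventually fail:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int
main (void)
{
  /* Cap the address space; once the heap bumps into this limit,
     sbrk (and later mmap) start failing inside malloc.  */
  struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
  if (setrlimit (RLIMIT_AS, &rl) != 0)
    perror ("setrlimit");

  /* Allocate until malloc reports failure.  */
  size_t total = 0;
  while (malloc (1024 * 1024) != NULL)
    total += 1024 * 1024;
  printf ("malloc failed after ~%zu MiB\n", total >> 20);
  return 0;
}

This only demonstrates the precondition (allocation failure); the deadlock
itself additionally needs the threaded arena scenario described below.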

We (sensibly) have code which will attempt to use mmap to allocate the memory
in the inner else clause below:

  arena_lock(ar_ptr, bytes);
  if(!ar_ptr)
    return 0;
  victim = _int_malloc(ar_ptr, bytes);
  if(!victim) {
    /* Maybe the failure is due to running out of mmapped areas. */
    if(ar_ptr != &main_arena) {
      (void)mutex_unlock(&ar_ptr->mutex);
      ar_ptr = &main_arena;
      (void)mutex_lock(&ar_ptr->mutex);
      victim = _int_malloc(ar_ptr, bytes);
      (void)mutex_unlock(&ar_ptr->mutex);
    } else {
      /* ... or sbrk() has failed and there is still a chance to mmap() */
      ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, bytes);
      (void)mutex_unlock(&main_arena.mutex);
      if(ar_ptr) {
        victim = _int_malloc(ar_ptr, bytes);
        (void)mutex_unlock(&ar_ptr->mutex);
      }
    }
  } else
    (void)mutex_unlock(&ar_ptr->mutex);


Note that the arena referenced by ar_ptr will still be locked when we call
arena_get2 in that else clause.  Furthermore, we know that ar_ptr must refer to
the main arena.

Now assume that there are no arenas on the free list and that we've already hit
the limit for the number of arenas we're willing to create.  In that case
arena_get2 will call reused_arena which looks like this:


static mstate
reused_arena (void)
{
  mstate result;
  static mstate next_to_use;
  if (next_to_use == NULL)
    next_to_use = &main_arena;

  result = next_to_use;
  do
    {
      if (!mutex_trylock(&result->mutex))
        goto out;

      result = result->next;
    }
  while (result != next_to_use);

  /* No arena available.  Wait for the next in line.  */
  (void)mutex_lock(&result->mutex);

So let's make a couple more assumptions.  First, assume that next_to_use refers
to the main arena.  Second assume that all the other arenas are currently
locked by other threads.  And remember that the main arena is locked by the
current thread.

In that case the do-while loop will look at every arena on the list and
find them all locked.  When result comes back around to the main arena, the
loop exits and we call mutex_lock to acquire the main arena's lock.  But since
the main arena is already locked by the current thread, we deadlock.
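
The failure mode is a classic self-deadlock on a non-recursive lock.  glibc's
internal mutex_lock isn't literally pthread_mutex_lock, but the effect is the
same as in this minimal standalone illustration (my own sketch; the pthread
mutex just stands in for main_arena.mutex):

#include <pthread.h>
#include <stdio.h>

int
main (void)
{
  /* Default, non-recursive mutex standing in for main_arena.mutex.  */
  pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

  pthread_mutex_lock (&m);      /* arena_lock in libc_malloc */
  puts ("first lock acquired");

  /* reused_arena falls through to mutex_lock on the arena this thread
     already holds; with the default mutex type this simply hangs.  */
  pthread_mutex_lock (&m);
  puts ("never reached");
  return 0;
}

Compile with gcc -pthread; the second lock never returns.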

libc_calloc seems to have the same problem.

libc_memalign, libc_valloc, and libc_pvalloc all release the lock before
calling arena_get2 in the case where sbrk failed, which seems to be the right
thing to do.

Closely related: fixing the deadlock gives us a good opportunity to unify the
5 implementations a little.  For example, we have this from libc_memalign:

      mstate prev = ar_ptr->next ? ar_ptr : 0;
      (void)mutex_unlock(&ar_ptr->mutex);
      ar_ptr = arena_get2(prev, bytes);


That seems like the right way to go, so I'd like to unify the 5
implementations to have a similar structure: release the lock within the
conditional just prior to calling arena_get2, while remaining safe with
respect to modification of ar_ptr->next.
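
Concretely, the inner else clause in the libc_malloc excerpt above could be
restructured along the libc_memalign lines, something like this (my
illustration of the proposed shape, not the attached patch verbatim):

    } else {
      /* ... or sbrk() has failed and there is still a chance to mmap().
         Compute PREV while we still hold the lock, then release the lock
         *before* calling arena_get2, so reused_arena can never block on a
         mutex this thread already owns.  */
      mstate prev = ar_ptr->next ? ar_ptr : 0;
      (void)mutex_unlock(&ar_ptr->mutex);
      ar_ptr = arena_get2(prev, bytes);
      if(ar_ptr) {
        victim = _int_malloc(ar_ptr, bytes);
        (void)mutex_unlock(&ar_ptr->mutex);
      }
    }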


Dropping the lock prior to calling arena_get2 avoids the deadlock.  However,
in this scenario all that happens is that the retried allocation fails too:
the original allocation was trying to allocate from the main arena, and the
retry will end up back in the main arena if it can't acquire a lock for any
other arena.

Getting this 100% correct would require retrying in every arena, potentially
blocking to acquire the lock at each step.  That seems rather excessive.  So
instead we can just pass in a state variable indicating we're in a retry
situation and, if so, avoid retrying in the same arena that just failed.
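
A rough sketch of that idea against the reused_arena quoted above; the
avoid_arena name is my own label, the closing lines after the out label are
filled in from the obvious structure, and the attached patch may differ in
detail:

static mstate
reused_arena (mstate avoid_arena)
{
  mstate result;
  static mstate next_to_use;
  if (next_to_use == NULL)
    next_to_use = &main_arena;

  result = next_to_use;
  do
    {
      /* Skip the arena whose allocation just failed; retrying there
         can only fail again.  */
      if (result != avoid_arena && !mutex_trylock(&result->mutex))
        goto out;

      result = result->next;
    }
  while (result != next_to_use);

  /* Never block on (or pointlessly retry in) the arena that just
     failed; step past it instead.  */
  if (result == avoid_arena)
    result = result->next;

  /* No arena available.  Wait for the next in line.  */
  (void)mutex_lock(&result->mutex);

 out:
  next_to_use = result->next;
  return result;
}

arena_get2 would need a matching argument so the retry paths in libc_malloc
and friends can pass down the arena that just failed.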

Attached is the fix we're currently using internally at Red Hat.  It needed
slight updating since Uli cleaned up malloc.c & arena.c, but the basic
structure is the same.  I've verified glibc still builds after adjusting for
Uli's cleanups.
