
[PATCH][RFC] Allow explicit shrinking of arena heaps using an environment variable


Hi,

The current arena-per-thread implementation mmaps a 64MB region (on
x86_64) and initially marks most of it PROT_NONE. When malloc requests
need to be serviced from this map, contiguous portions are given
read+write permissions. When memory at the end of the map is freed,
madvise() is called on the consolidated region to notify the kernel
that we don't need that part any more. For setuid programs we go a
step further and ensure that the consolidated region has no
permissions at all, by calling mmap() on it with MAP_FIXED and
PROT_NONE.
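
To make the mechanics concrete, the sequence of calls looks roughly
like the sketch below. This is not the actual arena code; the sizes,
MAP_NORESERVE and the choice of MADV_DONTNEED are assumptions based on
the description above.

#define _GNU_SOURCE
#include <sys/mman.h>

#define HEAP_SIZE (64UL * 1024 * 1024)  /* 64MB arena map on x86_64 */

int main (void)
{
  /* Reserve the whole heap up front with no permissions.  */
  char *heap = mmap (NULL, HEAP_SIZE, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
  if (heap == MAP_FAILED)
    return 1;

  /* Grow: give a contiguous portion read+write permissions when
     requests need to be serviced from this map.  */
  mprotect (heap, 1024 * 1024, PROT_READ | PROT_WRITE);

  /* Shrink: tell the kernel we no longer need the freed tail.  The
     pages stay mapped and remain accessible to the process.  */
  madvise (heap + 512 * 1024, 512 * 1024, MADV_DONTNEED);

  /* Setuid case (and, with this patch, MALLOC_ARENA_SHRINK): remap
     the freed tail with no permissions so it is genuinely unusable.  */
  mmap (heap + 512 * 1024, 512 * 1024, PROT_NONE,
        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_NORESERVE, -1, 0);

  return 0;
}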

This patch extends this functionality for setuid programs to all
programs through the use of an environment variable
MALLOC_ARENA_SHRINK. There are two motivations for this:

1. Make sure that the consolidated sections are unusable, so that the
   arena simulates the main process heap much more closely, regardless
   of whether the program is setuid or not.
2. One may want to look at the memory maps in /proc/PID/maps and get
   an estimate of how much of each arena is in use, similar to the
   main process heap. The main process heap *actually* shrinks, so it
   is easy to see how much of it is in use, but for the arenas there
   is currently no way to tell; this patch changes that. The way to
   identify the vmas that are acting as arenas is to collate the
   /proc/PID/maps data with an strace of the program.

The default behaviour remains as before -- this new behaviour is only
seen when MALLOC_ARENA_SHRINK is exported and set to a positive value.
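
For completeness, the patch also wires this up as a new mallopt
parameter, M_ARENA_SHRINK (see the malloc.h hunk below), so a program
built against the patched headers could enable the same behaviour
itself; any positive value switches it on:

#include <malloc.h>

int main (void)
{
  /* Equivalent to running with MALLOC_ARENA_SHRINK=1 exported in the
     environment.  */
  mallopt (M_ARENA_SHRINK, 1);

  /* ... the rest of the program ... */
  return 0;
}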

I have verified that the patch does not cause any regressions on
x86_64, and exercised the new behaviour with the following sample
program:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

void *thr (void *unused)
{
  void *m = malloc (64*1024);
  printf ("returned %p\n", m);
  memset (m, 0, 64*1024);
  free (m);
  m = malloc (4);
  printf ("returned %p\n", m);
  memset (m, 1, 4);

  while (1) sleep (10);
}

int main ()
{
  pthread_t t;

  pthread_create (&t, NULL, thr, NULL);
  pthread_join (t, NULL);
  return 0;
}

When the program is compiled and executed, one can see an extra vma
with no permissions within the arena in /proc/PID/maps. Ideally the
kernel should consolidate the two PROT_NONE vmas (one left over from
the initial vma creation and the other resulting from the shrink).
That is only a cosmetic issue though, since one can still easily see
that three, rather than two, contiguous vmas form the arena in
question. I may take a look at the kernel end some time later and try
to fix the vma consolidation, if this looks like a good idea.
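
For anyone who wants to reproduce this without collating strace
output, a throwaway helper like the one below (not part of the patch
or of the test program above) can be called from thr() just before the
sleep loop to dump the maps:

#include <stdio.h>

/* Copy /proc/self/maps to stdout so the PROT_NONE vmas left behind by
   the shrink are visible from within the test program itself.  */
static void dump_maps (void)
{
  char line[512];
  FILE *f = fopen ("/proc/self/maps", "r");

  if (f == NULL)
    return;
  while (fgets (line, sizeof line, f) != NULL)
    fputs (line, stdout);
  fclose (f);
}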

Regards,
Siddhesh

ChangeLog:

2012-07-20  Siddhesh Poyarekar  <siddhesh@redhat.com>

	* malloc/arena.c (ptmalloc_init): Look for MALLOC_ARENA_SHRINK
	environment variable.
	(shrink_heap): Shrink heap if arena_shrink is enabled.
	* malloc/malloc.c (struct malloc_par): New member arena_shrink.
	(M_ARENA_SHRINK): New mallopt parameter.
	(__libc_mallopt): Add case for M_ARENA_SHRINK.
	* malloc/malloc.h (M_ARENA_SHRINK): New mallopt parameter.
diff --git a/malloc/arena.c b/malloc/arena.c
index 33c4ff3..f556a0c 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -436,6 +436,13 @@ ptmalloc_init (void)
 		    __libc_mallopt(M_ARENA_TEST, atoi(&envline[11]));
 		}
 	      break;
+	    case 12:
+	      if (! __builtin_expect (__libc_enable_secure, 0))
+		{
+		  if (memcmp (envline, "ARENA_SHRINK", 12) == 0)
+		    __libc_mallopt(M_ARENA_SHRINK, atoi(&envline[13]));
+		}
+	      break;
 #endif
 	    case 15:
 	      if (! __builtin_expect (__libc_enable_secure, 0))
@@ -613,6 +620,12 @@ grow_heap(heap_info *h, long diff)
 static int
 shrink_heap(heap_info *h, long diff)
 {
+#ifdef PER_THREAD
+# define DO_SHRINK __builtin_expect (mp_.arena_shrink, 0)
+#else
+# define DO_SHRINK 0
+#endif
+
   long new_size;
 
   new_size = (long)h->size - diff;
@@ -620,7 +633,7 @@ shrink_heap(heap_info *h, long diff)
     return -1;
   /* Try to re-map the extra heap space freshly to save memory, and
      make it inaccessible. */
-  if (__builtin_expect (__libc_enable_secure, 0))
+  if (__builtin_expect (__libc_enable_secure, 0) || DO_SHRINK)
     {
       if((char *)MMAP((char *)h + new_size, diff, PROT_NONE,
 		      MAP_FIXED) == (char *) MAP_FAILED)
@@ -633,6 +646,8 @@ shrink_heap(heap_info *h, long diff)
 
   h->size = new_size;
   return 0;
+
+#undef DO_SHRINK
 }
 
 /* Delete a heap. */
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 28039b4..9d74f50 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1732,6 +1732,7 @@ struct malloc_par {
 #ifdef PER_THREAD
   INTERNAL_SIZE_T  arena_test;
   INTERNAL_SIZE_T  arena_max;
+  INTERNAL_SIZE_T  arena_shrink;
 #endif
 
   /* Memory map support */
@@ -1785,6 +1786,7 @@ static struct malloc_par mp_ =
 /*  Non public mallopt parameters.  */
 #define M_ARENA_TEST -7
 #define M_ARENA_MAX  -8
+#define M_ARENA_SHRINK -9
 #endif
 
 
@@ -4781,6 +4783,11 @@ int __libc_mallopt(int param_number, int value)
     if (value > 0)
       mp_.arena_max = value;
     break;
+
+  case M_ARENA_SHRINK:
+    if (value > 0)
+      mp_.arena_shrink = 1;
+    break;
 #endif
   }
   (void)mutex_unlock(&av->mutex);
diff --git a/malloc/malloc.h b/malloc/malloc.h
index 02b28c7..21c8b29 100644
--- a/malloc/malloc.h
+++ b/malloc/malloc.h
@@ -138,6 +138,7 @@ extern struct mallinfo mallinfo (void) __THROW;
 #define M_PERTURB	    -6
 #define M_ARENA_TEST	    -7
 #define M_ARENA_MAX	    -8
+#define M_ARENA_SHRINK	    -9
 
 /* General SVID/XPG interface to tunable parameters. */
 extern int mallopt (int __param, int __val) __THROW;
