This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



[commit] Reduce memory usage for gcore


A user tried to generate a core file for an application that took more than
half of all available RAM.  It didn't work too well: during gcore we allocate
as much memory as the largest contiguous allocation in the inferior.  Easily
avoided, as follows.  Tested on x86_64-pc-linux-gnu and committed.
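
For illustration only, here is a minimal standalone sketch of the
bounded-buffer idea the patch implements.  The read_fn/write_fn callbacks
are hypothetical stand-ins for target_read_memory and
bfd_set_section_contents; everything else mirrors the loop in the diff:

#include <stddef.h>
#include <stdlib.h>

#define MAX_COPY_BYTES (1024 * 1024)

/* Copy TOTAL_SIZE bytes through a buffer of at most MAX_COPY_BYTES,
   calling READ_FN and WRITE_FN once per chunk.  Both callbacks are
   hypothetical placeholders.  Returns 0 on success, -1 on failure.  */
static int
copy_throttled (size_t total_size,
                int (*read_fn) (void *buf, size_t offset, size_t len),
                int (*write_fn) (const void *buf, size_t offset, size_t len))
{
  size_t size, offset = 0;
  void *memhunk;

  if (total_size == 0)
    return 0;

  size = total_size < MAX_COPY_BYTES ? total_size : MAX_COPY_BYTES;
  memhunk = malloc (size);
  if (memhunk == NULL)
    return -1;

  while (total_size > 0)
    {
      if (size > total_size)
        size = total_size;

      if (read_fn (memhunk, offset, size) != 0
          || write_fn (memhunk, offset, size) != 0)
        {
          free (memhunk);
          return -1;
        }

      total_size -= size;
      offset += size;
    }

  free (memhunk);
  return 0;
}

The point is that the peak allocation is min (section size, 1MB) instead
of the full section size.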

Note: I tested this by hand using gdb.base/bigcore.  Don't Do That.  (A)
GDB does not write out sparse files for cores which are mostly zero, so a
core file which the OS can dump as about 2MB actually takes several GB or
more of disk space and quite a lot of time.  (B) Something in Linux's I/O
layer is made very unhappy by this workload; it fails to reclaim dirty
buffers, and the OOM killer kills a couple of things for you.  But the
patch survived.
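
On point (A): a core writer could in principle avoid the disk cost by
seeking over all-zero blocks so the filesystem backs them with holes.
GDB's gcore does not do this; the sketch below is only an illustration of
the general technique, using plain POSIX calls:

#include <string.h>
#include <unistd.h>

/* Write LEN bytes of BUF to FD at its current offset, but lseek over
   4KB blocks that are entirely zero so the filesystem can allocate
   holes instead of real disk blocks.  The caller must ftruncate the
   file to its final length afterward so a trailing hole survives.
   Returns 0 on success, -1 on failure.  */
static int
write_sparse (int fd, const char *buf, size_t len)
{
  static const char zeros[4096];
  size_t done = 0;

  while (done < len)
    {
      size_t blk = len - done;
      if (blk > sizeof zeros)
        blk = sizeof zeros;

      if (memcmp (buf + done, zeros, blk) == 0)
        {
          /* All zero: skip ahead, leaving a hole.  */
          if (lseek (fd, (off_t) blk, SEEK_CUR) == (off_t) -1)
            return -1;
        }
      else if (write (fd, buf + done, blk) != (ssize_t) blk)
        return -1;

      done += blk;
    }

  return 0;
}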

-- 
Daniel Jacobowitz
CodeSourcery

2006-10-20  Daniel Jacobowitz  <dan@codesourcery.com>

	* gcore.c (MAX_COPY_BYTES): Define.
	(gcore_copy_callback): Use it to limit allocation.

Index: gcore.c
===================================================================
RCS file: /cvs/src/src/gdb/gcore.c,v
retrieving revision 1.18
diff -u -p -r1.18 gcore.c
--- gcore.c	17 Dec 2005 22:33:59 -0000	1.18
+++ gcore.c	20 Oct 2006 20:53:37 -0000
@@ -1,6 +1,7 @@
 /* Generate a core file for the inferior process.
 
-   Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
+   Copyright (C) 2001, 2002, 2003, 2004, 2005, 2006
+   Free Software Foundation, Inc.
 
    This file is part of GDB.
 
@@ -31,6 +32,11 @@
 
 #include "gdb_assert.h"
 
+/* The largest amount of memory to read from the target at once.  We
+   must throttle it to limit the amount of memory used by GDB during
+   generate-core-file for programs with large resident data.  */
+#define MAX_COPY_BYTES (1024 * 1024)
+
 static char *default_gcore_target (void);
 static enum bfd_architecture default_gcore_arch (void);
 static unsigned long default_gcore_mach (void);
@@ -444,7 +450,8 @@ objfile_find_memory_regions (int (*func)
 static void
 gcore_copy_callback (bfd *obfd, asection *osec, void *ignored)
 {
-  bfd_size_type size = bfd_section_size (obfd, osec);
+  bfd_size_type size, total_size = bfd_section_size (obfd, osec);
+  file_ptr offset = 0;
   struct cleanup *old_chain = NULL;
   void *memhunk;
 
@@ -456,19 +463,35 @@ gcore_copy_callback (bfd *obfd, asection
   if (strncmp ("load", bfd_section_name (obfd, osec), 4) != 0)
     return;
 
+  size = min (total_size, MAX_COPY_BYTES);
   memhunk = xmalloc (size);
   /* ??? This is crap since xmalloc should never return NULL.  */
   if (memhunk == NULL)
     error (_("Not enough memory to create corefile."));
   old_chain = make_cleanup (xfree, memhunk);
 
-  if (target_read_memory (bfd_section_vma (obfd, osec),
-			  memhunk, size) != 0)
-    warning (_("Memory read failed for corefile section, %s bytes at 0x%s."),
-	     paddr_d (size), paddr (bfd_section_vma (obfd, osec)));
-  if (!bfd_set_section_contents (obfd, osec, memhunk, 0, size))
-    warning (_("Failed to write corefile contents (%s)."),
-	     bfd_errmsg (bfd_get_error ()));
+  while (total_size > 0)
+    {
+      if (size > total_size)
+	size = total_size;
+
+      if (target_read_memory (bfd_section_vma (obfd, osec) + offset,
+			      memhunk, size) != 0)
+	{
+	  warning (_("Memory read failed for corefile section, %s bytes at 0x%s."),
+		   paddr_d (size), paddr (bfd_section_vma (obfd, osec)));
+	  break;
+	}
+      if (!bfd_set_section_contents (obfd, osec, memhunk, offset, size))
+	{
+	  warning (_("Failed to write corefile contents (%s)."),
+		   bfd_errmsg (bfd_get_error ()));
+	  break;
+	}
+
+      total_size -= size;
+      offset += size;
+    }
 
   do_cleanups (old_chain);	/* Frees MEMHUNK.  */
 }
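
For reference, the loop above runs when the user issues GDB's
generate-core-file command (the gcore alias works too):

(gdb) generate-core-file prog.core

With the patch applied, the buffer allocated for each load section is
bounded by MAX_COPY_BYTES rather than by the section's full size.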

