This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.
Re: reg: GDB's generate-core-file option
- From: Michael Snyder <msnyder at vmware dot com>
- To: Aarthy <aarthy82 at gmail dot com>
- Cc: "gdb at sourceware dot org" <gdb at sourceware dot org>
- Date: Mon, 28 Sep 2009 21:29:35 -0700
- Subject: Re: reg: GDB's generate-core-file option
- References: <30ca7ede0909281909n40cccc62m8e3c574546f7208f@mail.gmail.com>
Aarthy wrote:
> Hi there,
>
> I am currently working on a product which has many processes running
> in a multi-threaded fashion.
Is it threads? Or forks?
If it's threads, gdb should be able to save all the threads
into a single corefile. Can you attach gdb to the "main" thread?
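If it is threads, a single attached gdb sees all of them, so one corefile covers the whole process. A session might look like the sketch below (the PID 1234 and the output path are placeholders, not taken from the original report):

```
(gdb) attach 1234
(gdb) info threads                  # all threads of the process are listed
(gdb) generate-core-file /tmp/core.1234
Saved corefile /tmp/core.1234
```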
> I run my product on a device which has
> 200 MB of hard disk space to write the core file. It doesn't have any
> swap memory as such. My requirement is that I need to collect a
> snapshot of each process at a particular time. So I used the
> generate-core-file option after attaching GDB to each process.
You didn't say how much RAM your device has.
Are you running a bunch of gdbs at the same time,
or are you running them one after another?
generate-core-file uses a lot of RAM -- if you are running
many gdbs at the same time, you might be running out of memory.
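One way to keep only a single gdb in memory at any moment is to take the snapshots sequentially. A minimal sketch using gdb's batch mode follows; the PID arguments and the /tmp/core.* paths are placeholders, not details from the original report:

```shell
#!/bin/sh
# Sketch: dump cores one process at a time, so only one gdb
# (with its large copy buffer) is resident at any moment.
# PIDs are passed as arguments; output paths are placeholders.
for pid in "$@"; do
    echo "dumping core for PID $pid"
    gdb -p "$pid" -batch -ex "generate-core-file /tmp/core.$pid"
done
echo "all snapshots done"
```

Run as `sh dump-cores.sh 1234 5678 ...`; `-batch` makes each gdb exit as soon as its `-ex` command finishes, releasing its memory before the next one starts.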
> The problem is that for one process I get the following error.
>
> warning: Failed to write corefile contents (No space left on device).
> ../../gdb/utils.c:1058: internal-error: virtual memory exhausted:
> can't allocate 92123136 bytes.
This is an out-of-memory error. It comes from the function "nomem()"
in utils.c, and results from the failure of an xmalloc() call.
There is only one xmalloc() call in gcore.c; it looks like this:

  memhunk = xmalloc (size);

This is trying to make a copy of one of the program's memory sections.
So your program (or this thread or fork) has a memory section
that is 92,123,136 bytes long, and xmalloc can't acquire that
many bytes of free memory (at least in a single hunk).
> A problem internal to GDB has been detected,
> further debugging may prove unreliable.
> I would like to know what could be the reason. I have about 15
> processes running; for every process except this one I was able to
> generate the forced core file.
> This is my ulimit -a output:
>
> Linux(debug)# ulimit -a
> core file size (blocks, -c) 73242
> data seg size (kbytes, -d) 62500
> file size (blocks, -f) unlimited
> max locked memory (kbytes, -l) unlimited
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> stack size (kbytes, -s) unlimited
> cpu time (seconds, -t) unlimited
> max user processes (-u) 26624
> virtual memory (kbytes, -v) 125000
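That last line is the likely culprit: a 125000-kB (about 122 MB) address-space cap leaves little headroom once gdb's own footprint plus a 92 MB copy buffer are counted. As a rough sketch, raising the limit in the shell that launches gdb is enough, since child processes inherit it (assuming the hard limit permits the change):

```shell
# Sketch: raise the address-space limit for this shell and its
# children (a subsequently launched gdb included), then verify.
# Only works if the hard limit allows raising the soft limit.
ulimit -v unlimited
ulimit -v    # confirm the new limit
```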
> If I change the virtual memory size to unlimited, I am able to
> generate the core. Kindly let me know how exactly
> generate-core-file works.
>
> Regards,
> Aarthy.