This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [RFC] GDB performance testing infrastructure


On 08/22/2013 04:38 AM, Tom Tromey wrote:
Yao>   + GDB loads a Python script, in which some operations are performed and
Yao>     performance data (time and memory usage) is collected into a file.
Yao>     The performance test is driven by Python, because GDB has a good
Yao>     Python binding now.  We can also use Python to collect the performance
Yao>     data, process it, and draw graphs, which is very convenient.

I wonder whether there are cases where the needed API isn't readily
exposed to Python.

I suppose that is motivation to add them though:-)

Right, as we write more and more test cases, we will need more Python
APIs for different components of GDB.
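To make the idea concrete, here is a minimal sketch of the kind of
script GDB could load for such a test; the helper name, the output file
name, and the measured command are placeholders of mine, not part of any
existing API:

import time
import resource  # peak memory usage, POSIX only

import gdb

def measure(label, func):
    """Run FUNC once, recording wall-clock time and peak RSS."""
    start = time.time()
    func()
    elapsed = time.time() - start
    # ru_maxrss is in kilobytes on Linux.
    maxrss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    with open("perf-data.txt", "a") as f:
        f.write("%s in %.2f (maxrss %d KB)\n" % (label, elapsed, maxrss))

# Example operation: dump the list of functions known to GDB.
measure("info-functions",
        lambda: gdb.execute("info functions", to_string=True))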


Yao>     2. When we test the performance of GDB reading symbols in and
Yao>        looking up symbols, we can either fake a lot of debug
Yao>        information in the executable or fake a lot of `objfile',
Yao>        `symtab', and `symbol' objects in GDB.  We may extend `jit.c'
Yao>        to add symbols on the fly.  `jit.c' is able to add an `objfile'
Yao>        and a `symtab' to GDB from an external reader.  We could factor
Yao>        this part out to add `objfile', `symtab', and `symbol' entries
Yao>        to GDB for performance-testing purposes.  However, I may be wrong.

I tend to think it is better to go through the normal symbol reading
paths.  The JIT code does things specially, and performance-testing that
path may not show improvements or regressions in "ordinary" uses.


I am OK with that approach.  On each machine where the performance
testing is deployed, people will have to find some large executables
with debug info and track the performance of GDB loading them and
searching for symbols.
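For example, a rough sketch of such a test through GDB's Python API
might look like the following; the binary path and the symbol name are
just placeholders for whatever large executable a machine happens to
have:

import time

import gdb

def timed(command):
    """Run a GDB command and return the elapsed wall-clock time."""
    start = time.time()
    gdb.execute(command, to_string=True)
    return time.time() - start

gdb.execute("set confirm off")

# Time loading the symbols of a large executable through the normal
# symbol-reading path.
print("symtab load in %.2f" % timed("file /path/to/large-executable"))

# Time a global symbol lookup.
start = time.time()
gdb.lookup_global_symbol("main")
print("lookup in %.2f" % (time.time() - start))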

Yao>   * Run `single-step' with GDBserver
Yao>   ,----
Yao>   | $ make check RUNTESTFLAGS='--target_board=native-gdbserver single-step.exp'
Yao>   `----

Do you anticipate that these tests will be run by default?


No.

One concern I have is that if we generate truly large test cases, then
running the test suite could become quite painful.  Also, it seems that
performance tests are best run on a quiet system -- so running them by
default may in general not yield worthwhile data.

I plan to add a new makefile target 'check-perf' to run all of the
performance test cases.


Yao>   Here is the performance data; each row shows the time spent
Yao>   loading and unloading a certain number of shared libraries.  We
Yao>   can use this data to track GDB's performance in handling shared
Yao>   libraries.

Yao>   ,----
Yao>   | solib 128 in 0.53
Yao>   | solib 256 in 1.94
Yao>   | solib 512 in 8.31
Yao>   | solib 1024 in 47.34
Yao>   | solib 2048 in 384.75
Yao>   `----

Perhaps the .py code can deliver Python objects to some test harness
rather than just printing data free-form?  Then we can emit the data in
more easily manipulated forms.

Agreed.  In my experiments, I save the test results in a Python object
and print them in plain text or JSON format later.
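Something along these lines, where the class and file names are just
illustrative:

import json

class TestResult(object):
    def __init__(self, name):
        self.name = name
        self.samples = []          # list of (parameter, seconds) pairs

    def record(self, parameter, seconds):
        self.samples.append((parameter, seconds))

    def write_text(self, path):
        with open(path, "w") as f:
            for parameter, seconds in self.samples:
                f.write("%s %s in %.2f\n" % (self.name, parameter, seconds))

    def write_json(self, path):
        with open(path, "w") as f:
            json.dump({"name": self.name, "samples": self.samples}, f)

# For instance, with the solib numbers above:
result = TestResult("solib")
result.record(128, 0.53)
result.record(256, 1.94)
result.write_text("perf.txt")
result.write_json("perf.json")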

I'll post patches soon...

--
Yao (齐尧)

