[PATCH 4/6] .gdb_index prod perf regression: find before insert in unordered_map


"perf" shows the unordered_map::emplace call in write_hash_table a bit
high up on profiles.  Fix this using the find + insert idiom instead
of going straight to insert.

I tried doing the same to the other unordered_maps::emplace calls in
the file, but saw no performance improvement, so left them be.
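
To illustrate the idiom outside of GDB, here's a minimal standalone
sketch (the map, key type, and function names below are made up for
the example; they are not taken from dwarf2read.c):

  #include <string>
  #include <unordered_map>

  /* Hypothetical map from a key to a constant-pool offset.  */
  static std::unordered_map<std::string, size_t> offsets;

  /* Going straight to emplace: on a duplicate key the container
     still allocates and constructs a node, does the lookup, and
     then destroys the new node.  */
  size_t intern_emplace (const std::string &key, size_t next_offset)
  {
    const auto insertpair = offsets.emplace (key, next_offset);
    return insertpair.first->second;
  }

  /* find + insert: look up first, and only pay for the node
     allocation when the key is actually new.  */
  size_t intern_find_first (const std::string &key, size_t next_offset)
  {
    const auto found = offsets.find (key);
    if (found != offsets.end ())
      return found->second;
    offsets.emplace (key, next_offset);
    return next_offset;
  }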

With a '-g3 -O2' build of gdb, and:

  $ cat save-index.cmd
  set $i = 0
  while $i < 100
    save gdb-index .
    set $i = $i + 1
  end
  $ time ./gdb -data-directory=data-directory -nx --batch -q -x save-index.cmd  ./gdb.pristine

I get an improvement of ~7%:

  ~7.0s => ~6.5s (average of 5 runs).

gdb/ChangeLog:
2017-06-12  Pedro Alves  <palves@redhat.com>

	* dwarf2read.c (write_hash_table): Check if key already exists
	before emplacing.
---
 gdb/ChangeLog    |  5 +++++
 gdb/dwarf2read.c | 21 ++++++++++++++++-----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/gdb/ChangeLog b/gdb/ChangeLog
index 4c8657c..01b66a1 100644
--- a/gdb/ChangeLog
+++ b/gdb/ChangeLog
@@ -1,5 +1,10 @@
 2017-06-12  Pedro Alves  <palves@redhat.com>
 
+	* dwarf2read.c (write_hash_table): Check if key already exists
+	before emplacing.
+
+2017-06-12  Pedro Alves  <palves@redhat.com>
+
 	* dwarf2read.c (data_buf::append_space): Rename to...
 	(data_buf::grow): ... this, and make private.  Adjust all callers.
 	(data_buf::append_uint): New method.
diff --git a/gdb/dwarf2read.c b/gdb/dwarf2read.c
index 63a591e..93fd275 100644
--- a/gdb/dwarf2read.c
+++ b/gdb/dwarf2read.c
@@ -23430,11 +23430,22 @@ write_hash_table (mapped_symtab *symtab, data_buf &output, data_buf &cpool)
 	if (it == NULL)
 	  continue;
 	gdb_assert (it->index_offset == 0);
-	const auto insertpair
-	  = symbol_hash_table.emplace (it->cu_indices, cpool.size ());
-	it->index_offset = insertpair.first->second;
-	if (!insertpair.second)
-	  continue;
+
+	/* Finding before inserting is faster than always trying to
+	   insert, because inserting always allocates a node, does the
+	   lookup, and then destroys the new node if another node
+	   already had the same key.  C++17 try_emplace will avoid
+	   this.  */
+	const auto found
+	  = symbol_hash_table.find (it->cu_indices);
+	if (found != symbol_hash_table.end ())
+	  {
+	    it->index_offset = found->second;
+	    continue;
+	  }
+
+	symbol_hash_table.emplace (it->cu_indices, cpool.size ());
+	it->index_offset = cpool.size ();
 	cpool.append_data (MAYBE_SWAP (it->cu_indices.size ()));
 	for (const auto iter : it->cu_indices)
 	  cpool.append_data (MAYBE_SWAP (iter));
-- 
2.5.5
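
(For reference, the C++17 try_emplace mentioned in the new comment
performs the lookup before allocating, so once GDB can require C++17
the explicit find above can fold back into a single call.  A sketch,
reusing the hypothetical map from the example earlier:)

  /* C++17: try_emplace only allocates and constructs a node when
     KEY is absent, so no separate find is needed.  */
  size_t intern_try_emplace (const std::string &key, size_t next_offset)
  {
    const auto insertpair = offsets.try_emplace (key, next_offset);
    return insertpair.first->second;
  }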

