This is the mail archive of the binutils@sources.redhat.com mailing list for the binutils project.



[rfc] limit iterations in compute_bucket_count


While looking at the exceedingly long link times in libgcj, I found
that this function was eating 66% of the link time.  This is because

  nsyms = 34338
  minsize = 8584
  maxsize = 68676
  iteration count = maxsize - minsize = 60092

which I find excessive.  This admittedly arbitrary patch drops the
function off the profile entirely and brings link time down to 47 seconds.

(There's still the absurdity of the time spent in libtool, but
that's an entirely different problem.)

Thoughts?


r~


	* elflink.c (compute_bucket_count): Limit number of test iterations.

Index: elflink.c
===================================================================
RCS file: /cvs/src/src/bfd/elflink.c,v
retrieving revision 1.103
diff -c -p -d -r1.103 elflink.c
*** elflink.c	17 Sep 2004 07:14:30 -0000	1.103
--- elflink.c	29 Sep 2004 20:17:57 -0000
*************** compute_bucket_count (struct bfd_link_in
*** 4530,4539 ****
    if (info->optimize)
      {
        unsigned long int nsyms = hashcodesp - hashcodes;
!       size_t minsize;
!       size_t maxsize;
        BFD_HOST_U_64_BIT best_chlen = ~((BFD_HOST_U_64_BIT) 0);
!       unsigned long int *counts ;
        bfd *dynobj = elf_hash_table (info)->dynobj;
        const struct elf_backend_data *bed = get_elf_backend_data (dynobj);
  
--- 4530,4538 ----
    if (info->optimize)
      {
        unsigned long int nsyms = hashcodesp - hashcodes;
!       size_t minsize, maxsize, step;
        BFD_HOST_U_64_BIT best_chlen = ~((BFD_HOST_U_64_BIT) 0);
!       unsigned long int *counts;
        bfd *dynobj = elf_hash_table (info)->dynobj;
        const struct elf_backend_data *bed = get_elf_backend_data (dynobj);
  
*************** compute_bucket_count (struct bfd_link_in
*** 4545,4550 ****
--- 4544,4554 ----
  	minsize = 1;
        best_size = maxsize = nsyms * 2;
  
+       /* ??? Limit iteration when nsyms is large.  */
+       step = 1;
+       if (nsyms > 1000)
+ 	step = (maxsize - minsize) / 100;
+ 
        /* Create array where we count the collisions in.  We must use bfd_malloc
  	 since the size could be large.  */
        amt = maxsize;
*************** compute_bucket_count (struct bfd_link_in
*** 4559,4565 ****
        /* Compute the "optimal" size for the hash table.  The criteria is a
  	 minimal chain length.  The minor criteria is (of course) the size
  	 of the table.  */
!       for (i = minsize; i < maxsize; ++i)
  	{
  	  /* Walk through the array of hashcodes and count the collisions.  */
  	  BFD_HOST_U_64_BIT max;
--- 4563,4569 ----
        /* Compute the "optimal" size for the hash table.  The criteria is a
  	 minimal chain length.  The minor criteria is (of course) the size
  	 of the table.  */
!       for (i = minsize; i < maxsize; i += step)
  	{
  	  /* Walk through the array of hashcodes and count the collisions.  */
  	  BFD_HOST_U_64_BIT max;

