This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.



Re: Reference counting bug in ld-linux.so.2 (2.1.2, 2.1.3, 2.1.9x, et al.)


> I've done some more extensive testing, and it looks like this patch
> introduces the opposite problem.  It seems that I can now induce a
> situation where a shared object can be unloaded too soon.  I will send
> a test case soon that demonstrates the problem.  I am also looking at
> a potential patch now that I have a clearer understanding of how the
> code works.  It appears that the reference counting is not handled in
> a symmetrical fashion.  Reference counts are bumped when the search
> lists are built, regardless of whether or not the object is already
> mapped into memory by some other implicit load.  The loader then uses
> the fact that the search list is or is not built to decide whether to
> add references to dependent objects.  I think this may be the root of
> the problem.  Reference counts should be managed independently of the
> building of the search list.  I will do some more debugging and post
> my findings.
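To make the quoted asymmetry concrete, here is a minimal model of the
open/close paths as that message describes them.  Every name here
(link_map_model, build_search_list, open_object, close_object) is a
hypothetical stand-in; only l_opencount loosely echoes the field in
glibc 2.1.x's struct link_map, and the logic is an illustration, not
actual loader source.

#include <stddef.h>

/* Hypothetical stand-in for glibc's struct link_map.  */
struct link_map_model {
    int l_opencount;                   /* reference count */
    struct link_map_model **deps;      /* dependency search list */
    int ndeps;                         /* 0 until the list is built */
};

/* Placeholder: in the real loader, _dl_map_object_deps builds the
   search list; here it is simply assumed to fill deps/ndeps.  */
void build_search_list(struct link_map_model *map);

/* Open path as described above: dependents are counted only while
   the search list is being built, so a second open of an
   already-mapped object skips the bumps entirely.  */
void open_object(struct link_map_model *map)
{
    map->l_opencount++;
    if (map->ndeps == 0) {             /* first open: list not built */
        build_search_list(map);
        for (int i = 0; i < map->ndeps; ++i)
            map->deps[i]->l_opencount++;
    }
    /* else: dependents are NOT bumped -- the asymmetry */
}

/* Close path: dependents are decremented unconditionally, whether
   or not this close actually unmaps the object.  */
void close_object(struct link_map_model *map)
{
    map->l_opencount--;
    for (int i = 0; i < map->ndeps; ++i)
        map->deps[i]->l_opencount--;
}

In this model, an open/close pair on an already-mapped object drives
the dependents' counts below their true value, which is exactly the
"unloaded too soon" symptom described above.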

Attached is a new test case (based on my previous one) that
demonstrates the new problem I alluded to above.  I also have a patch
that seems to fix it.  Basically, in dl_open_worker, when _dl_map_object
returns and the search list is already built, the dependent objects'
reference counts must still be bumped by one; in _dl_close they are
decremented even when the object is not going away, so the increments
have to match.  I would post the patch, except that I'm currently
working with 2.1.94 (patched Red Hat 7.0), along with 2.1.2 and 2.1.3.
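As a sketch only (reusing the hypothetical model from above, not the
actual patch described here), the symmetric version would bump
dependents on every open:

/* Symmetric variant: bump dependents on every open, mirroring
   close_object's unconditional decrement.  Illustrative only.  */
void open_object_fixed(struct link_map_model *map)
{
    map->l_opencount++;
    if (map->ndeps == 0)
        build_search_list(map);        /* first open: build the list */

    /* Bump dependents whether or not the list already existed, so
       every close_object has a matching increment to undo.  */
    for (int i = 0; i < map->ndeps; ++i)
        map->deps[i]->l_opencount++;
}

With this pairing, a dependency's count reaches zero only once every
explicit and implicit reference has been released.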

 ------
Allen Bauer.
Delphi/C++Builder/Kylix Sr. Staff Engineer.
Delphi/C++Builder/Kylix IDE R&D Manager.
Inprise/Borland Corporation.

new-loader-bug.tar.gz

