This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.


[Bug libc/11787] Program with large TLS segment fails aio_write


http://sourceware.org/bugzilla/show_bug.cgi?id=11787

--- Comment #13 from Carlos O'Donell <carlos_odonell at mentor dot com> 2012-03-24 16:05:19 UTC ---
(In reply to comment #12)
> (In reply to comment #8)
> 
> > Please note that the aio helper thread *should* be using a default 2MB stack on
> > x86, not 16K. I don't see anywhere that sets the helper thread's stack to 16K.
> 
> The helper thread stack size *is* set by __pthread_get_minstack.

Thanks, I just spotted that in nptl/sysdeps/unix/sysv/linux/aio_misc.h.
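
For reference, the helper thread creation there looks roughly like the
following. This is a paraphrase from memory rather than an exact copy of the
source, but the key point is the __pthread_get_minstack call (a GLIBC_PRIVATE
symbol), which folds the static TLS size into the helper thread's stack size:

/* Rough sketch of __aio_create_helper_thread from
   nptl/sysdeps/unix/sysv/linux/aio_misc.h; details may differ.  */
static inline int
__aio_create_helper_thread (pthread_t *threadp,
                            void *(*tf) (void *), void *arg)
{
  pthread_attr_t attr;

  /* Make sure the helper thread is created detached.  */
  pthread_attr_init (&attr);
  pthread_attr_setdetachstate (&attr, PTHREAD_CREATE_DETACHED);

  /* The helper thread needs only very little stack, but it must still
     be able to hold the static TLS block, hence __pthread_get_minstack
     rather than a bare PTHREAD_STACK_MIN.  */
  (void) pthread_attr_setstacksize (&attr, __pthread_get_minstack (&attr));

  int ret = pthread_create (threadp, &attr, tf, arg);

  (void) pthread_attr_destroy (&attr);
  return ret;
}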

> This was done here:
> 
> 2011-12-22  Ulrich Drepper  <drepper@gmail.com>
> 
>        [BZ #13088]
>        * sysdeps/unix/sysv/linux/timer_routines.c: Get minimum stack size
>        through __pthread_get_minstack.
>        * nptl-init.c (__pthread_initialize_minimal_internal): Get page size
>        directly from _rtld_global_ro.
>        (__pthread_get_minstack): New function.
>        * pthreadP.h: Declare __pthread_get_minstack.
>        * Versions (libpthread) [GLIBC_PRIVATE]: Add __pthread_get_minstack.
> 
> and is *precisely* the change we are asking for, for the other threads.

Thanks for pointing this out.
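
For anyone following along, my understanding is that __pthread_get_minstack
boils down to something like the snippet below (again a paraphrase, not the
exact nptl-init.c source), and the proposal for ordinary threads is essentially
to add the same static TLS term when sizing their stacks in allocate_stack():

/* Approximate shape of __pthread_get_minstack in nptl-init.c: one guard
   page plus the static TLS block plus the architectural minimum.  */
size_t
__pthread_get_minstack (const pthread_attr_t *attr)
{
  return GLRO (dl_pagesize) + __static_tls_size + PTHREAD_STACK_MIN;
}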

> I see that Rich Felker is in exact agreement with me:
> http://sourceware.org/bugzilla/show_bug.cgi?id=13088#c1
> 
> > You are also increasing the memory footprint of all applications that use TLS
> > that worked fine before.
> 
> It depends on what you call "memory footprint". Yes, we'll increase memory
> we *reserve* for stacks (VM size). But we will not *use* any more memory
> than what's already used (RSS).

Reservations still consume VM space and limit the maximum number of threads,
and they remain a real problem for kernel VM management and layout. We should
not increase the reserved VM space without due consideration.

> I think the only application that would notice this is one that was almost
> running out of VM space. An answer for such an app would be to decrease its
> default stack sizes, use a smaller number of threads, or switch to 64-bit.

Not true. What about applications with *lots* of threads, each running
computational kernels (little real stack usage) but carrying lots of
thread-local data? In those cases you could be doubling the reservation per
thread, leaving the application unable to spawn as many threads as before.
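
As a concrete, purely hypothetical illustration of that failure mode (the
4 MiB figure and the program are mine, not from this bug): each thread below
carries a large static TLS block but touches almost none of its stack, so on a
32-bit box the achievable thread count is bounded by the per-thread VM
reservation, not by RSS. Growing every reservation by the TLS size lowers that
bound directly.

/* Hypothetical illustration: many threads, tiny real stack usage, large
   static TLS block per thread.  Build with -pthread.  */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static __thread char tls_block[4 * 1024 * 1024];   /* 4 MiB static TLS */

static void *
worker (void *arg)
{
  tls_block[0] = 1;     /* touch a single page: RSS stays tiny */
  pause ();             /* park the thread forever */
  return arg;
}

int
main (void)
{
  pthread_t tid;
  int n = 0, err;

  /* Keep creating threads until the address space (or a limit) runs out.  */
  while ((err = pthread_create (&tid, NULL, worker, NULL)) == 0)
    ++n;

  printf ("created %d threads before failure: %s\n", n, strerror (err));
  return 0;
}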

> > Before making any changes we need to hear from the distros (RH, SuSE, Debian,
> > Gentoo, Ubuntu, etc.) about the kind of impact this might have, e.g. a quick
> > find / readelf -a / grep to determine on average the memory increase we are
> > looking at.
> 
> Would the above still apply if it's just VM size we are talking about?
> I don't see how readelf will reveal anything.

Yes. We need to get input from the distros. We should not make a change like
this without talking to them.

You use readelf to find the TLS segment and see how much larger the per-thread
memory reservation would become, then collect this data for as many
applications as possible to gauge the expected real-world impact.
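
To make that concrete, here is one way to collect the number; this is my own
sketch, not something from the bug. readelf -l on a binary shows its PT_TLS
program header, and the same information can be pulled out of a running
process with dl_iterate_phdr by summing p_memsz over the executable and the
initially loaded DSOs (ignoring alignment padding and later dlopen'd objects),
which approximates the static TLS each thread would add to its reservation:

/* Sketch: report the PT_TLS size of the executable and each loaded DSO,
   i.e. roughly the amount by which every thread's reservation would grow.  */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int
callback (struct dl_phdr_info *info, size_t size, void *data)
{
  size_t *total = data;
  (void) size;

  for (int i = 0; i < info->dlpi_phnum; i++)
    if (info->dlpi_phdr[i].p_type == PT_TLS)
      {
        printf ("%-40s TLS %zu bytes\n",
                info->dlpi_name[0] ? info->dlpi_name : "(main executable)",
                (size_t) info->dlpi_phdr[i].p_memsz);
        *total += info->dlpi_phdr[i].p_memsz;
      }
  return 0;
}

int
main (void)
{
  size_t total = 0;
  dl_iterate_phdr (callback, &total);
  printf ("total static TLS per thread: %zu bytes\n", total);
  return 0;
}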

> > There are multiple cases here.
> 
> > You appear to be suggesting the following:
> > 
> > For (a) the behaviour remains the same.
> > 
> > For (b) we adjust upward by the size of the static TLS data.
> > 
> > For (c) "
> > 
> > For (d) "
> > 
> > For (e) "
> 
> Yes, I believe that's what we are proposing.

I'm glad to see that I understand the problem.

Cheers,
Carlos.

