This is the mail archive of the libc-hacker@sourceware.cygnus.com mailing list for the glibc project.



I will take a vacation (gdb and linuxthreads)


> 
> > I think it is doable. We have
> >
> > kernel    glibc    # of signals    gdb
> > ------------------------------------------------
> > 2.0       2.0      2               works
> > 2.0       2.1      2               doesn't work
> > 2.1       2.0      2               works
> > 2.1       2.1      3               works
> >
> > The only case where gdb doesn't work is glibc 2.1 under Linux 2.0, due
> > to __pthread_sig_cancel. I think it is OK. If you can modify
> > LinuxThreads, I will take care of gdb.
> 
> We have a deal.  I'll do that next week.

I will take a vacation starting tomorrow and will be back on Jan. 6. I
will work on that after I get back.

BTW, there is still a problem with gdb. LinuxThreads in 2.1 calls
__clone with CLONE_PTRACE, which totally breaks gdb: gdb needs to attach
to the cloned thread but fails to do so because the thread is already
being ptraced as a result of CLONE_PTRACE.
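
To make that concrete, here is a minimal sketch of my own (an
illustration, not the LinuxThreads source): a clone() call with a
LinuxThreads-like flag set that includes CLONE_PTRACE. When the creating
process is itself being traced, CLONE_PTRACE makes the new task start
out traced by the same tracer, so a debugger's own later attempt to
attach to it fails.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int thread_body(void *arg)
{
    (void) arg;
    printf("child %d running\n", (int) getpid());
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL)
        return 1;

    /* Share VM, filesystem info, files and signal handlers, roughly as a
       LinuxThreads-style thread would, plus CLONE_PTRACE (the flag under
       discussion).  SIGCHLD is the exit signal so plain waitpid() works. */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND
                | CLONE_PTRACE | SIGCHLD;

    /* The stack grows down on x86, so pass the top of the allocation. */
    pid_t pid = clone(thread_body, stack + STACK_SIZE, flags, NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }

    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}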

> 
> Concerning ex6: I ran it 1500 times (all night long) on a dual Pentium
> Pro and it did not deadlock.  But this machine is running kernel
> 2.0.36; I'll check with its owner whether it's possible to put a recent
> 2.1 kernel on it.
> 
> Still, the bug could have something to do with the realtime signals.
> For one thing, RT signals are queued while regular signals are not, so
> bugs causing a signal to be sent twice instead of once might go
> unnoticed with regular signals, but break things with RT signals.
> Also, I'm afraid RT signals haven't been tested much, so there might
> be bugs in the kernel related to RT signals.
> 
> Do you think you could test ex6 with RT signals turned off in
> LinuxThreads?  E.g. just undef SIGRTMIN in pthread.c and see what
> happens.
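
To make the queuing difference concrete, here is a small standalone
sketch of my own (not from LinuxThreads or the ex6 test): it blocks a
signal, sends it to the process twice, unblocks it, and counts the
deliveries.  A classic signal such as SIGUSR1 is coalesced into a single
delivery, while an RT signal such as SIGRTMIN is queued and the handler
runs twice, so a bug that sends a signal twice instead of once stays
hidden with classic signals but shows up with RT signals.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t count;

static void handler(int sig)
{
    (void) sig;
    count++;
}

static int deliveries(int sig)
{
    struct sigaction sa;
    sigset_t set, old;

    count = 0;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sigaction(sig, &sa, NULL);

    /* Block the signal, send it twice while blocked, then unblock it so
       any pending instances are delivered. */
    sigemptyset(&set);
    sigaddset(&set, sig);
    sigprocmask(SIG_BLOCK, &set, &old);
    kill(getpid(), sig);
    kill(getpid(), sig);
    sigprocmask(SIG_SETMASK, &old, NULL);

    return count;
}

int main(void)
{
    printf("SIGUSR1:  delivered %d time(s)\n", deliveries(SIGUSR1));
    printf("SIGRTMIN: delivered %d time(s)\n", deliveries(SIGRTMIN));
    return 0;
}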

Ulrich, can you give it a try? I will shut down my machines shortly.

Thanks.


-- 
H.J. Lu (hjl@gnu.org)

