This is the mail archive of the gdb-patches@sources.redhat.com mailing list for the GDB project.


Re: RFA: lin-lwp bug with software-single-step or schedlock


This bug was noticed on MIPS, because MIPS GNU/Linux is
SOFTWARE_SINGLE_STEP_P. There's a comment in lin_lwp_resume:

  /* Apparently the interpretation of PID is dependent on STEP: If
     STEP is non-zero, a specific PID means `step only this process
     id'.  But if STEP is zero, then PID means `continue *all*
     processes, but give the signal only to this one'.  */
  resume_all = (PIDGET (ptid) == -1) || !step;

Now, I did some digging, and I believe this comment is completely incorrect. Saying "signal SIGWINCH" causes PIDGET (ptid) == -1, and the signal is assumed to be delivered to inferior_ptid. There is another problem in that area
- I think I've discovered that we neglect to single-step over a
breakpoint when we are told to continue with a signal, which is a dubious
decision - but by and large it works as expected.
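
To make that concrete, here is a tiny standalone model of the expression (not GDB code; the pid and step values are made up to mirror the "signal SIGWINCH" case):

/* Standalone model of the resume_all expression above, not GDB code.  */
#include <stdio.h>

/* Stand-in for: resume_all = (PIDGET (ptid) == -1) || !step;  */
static int
resume_all_old (int pid, int step)
{
  return (pid == -1) || !step;
}

int
main (void)
{
  /* "signal SIGWINCH": the core hands the target pid == -1 and
     step == 0 and expects the signal to reach inferior_ptid.  The
     pid == -1 test alone already selects resume-all here, so the
     "|| !step" clause is not what makes signal delivery work.  */
  printf ("resume_all = %d\n", resume_all_old (-1, 0));
  return 0;
}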

So if STEP is 0, we always resume all processes. STEP at this point _only_
refers to whether we want a PTRACE_SINGLESTEP or equivalent;
SOFTWARE_SINGLE_STEP has already been handled. We can't make policy
decisions based on STEP any more.
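
Dropping the || !step leaves just the PID test. A minimal before/after model (again standalone, not GDB code; the pid value is arbitrary) of the software-single-step case:

/* Standalone model of the proposed change, not GDB code.  */
#include <stdio.h>

static int
resume_all_old (int pid, int step)
{
  return (pid == -1) || !step;
}

static int
resume_all_new (int pid, int step)
{
  (void) step;              /* STEP no longer decides the scope.  */
  return pid == -1;
}

int
main (void)
{
  /* Software single-step of one thread: the step breakpoints have
     already been inserted, so the target is handed step == 0 and a
     specific pid.  The old expression resumes every LWP; the new
     one resumes only the requested one.  */
  printf ("old: %d  new: %d\n",
          resume_all_old (1234, 0), resume_all_new (1234, 0));
  return 0;
}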

I tried removing the || !step. It's pretty hard to tell, since there are
still a few non-deterministic failures on my test systems (which is what I
was actually hunting when I found this!), but I believe testsuite results are
improved on i386. One run of just the thread tests (after the patch in my
last message, which I've committed) shows that these all got fixed:

Shouldn't, per the remote.c Hg discussion, the code be changed so that lin_lwp_resume() has complete information and, hence, can correctly determine whether a resume-all or a resume-one is needed?
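
For illustration only, one purely hypothetical shape of that (none of these names exist in GDB; it only sketches the idea of passing the scope explicitly rather than inferring it from PID and STEP):

/* Purely hypothetical sketch, not GDB code; all names are invented.  */
enum resume_scope { RESUME_ONE, RESUME_ALL };

static void
lin_lwp_resume_sketch (int pid, int step, int signo,
                       enum resume_scope scope)
{
  int resume_all = (scope == RESUME_ALL);

  /* Resume either the single LWP or every LWP, delivering SIGNO only
     to the LWP identified by PID; STEP only selects single-step
     vs. continue for whatever is resumed.  */
  (void) pid; (void) step; (void) signo; (void) resume_all;
}

int
main (void)
{
  lin_lwp_resume_sketch (1234, 0, 0, RESUME_ONE);
  return 0;
}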

Andrew


