This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: record-btrace


On Tue, 29 Jan 2013 12:03:40 +0100, Metzger, Markus T wrote:
> "target record-full"
> "target record-btrace"
> "target record" aliases "target record-full"
> "record full"
> "record btrace"
> " record" aliases "record full"

Maybe "record" and "target record" could already print an error about the
command being deprecated; but that is a subtle difference, and I do not mind
either way.


> If we now want to add LBR-based branch tracing, we could either change
> record-btrace to record-btrace-bts and add record-btrace-lbr or make "bts"
> and "lbr" configuration options of record-btrace.

If we had LBR I believe it should always be enabled by default - when
available - as AFAIK it has no measurable performance overhead.  So the
way of enabling it explicitly should not matter much.


> > OK; therefore two btrace struct target_ops for linux-nat.c vs. remote.c and one
> > btrace struct target_ops for record btrace (vs. full).
> 
> If you're OK to keep the btrace target_ops methods, we shouldn't need
> separate target_ops structs for native and remote.  The btrace methods will
> be supplied by the native and remote targets.  The record-btrace target will
> be on top and use the btrace methods from the target beneath.

[ OT: Could you wrap the lines to 80 columns?  It is the gdb-patches mailing
list style.  ]

So there will be:
amd64-linux-nat.c:
  current:
  t->to_enable_btrace = amd64_linux_enable_btrace;
  new one:
  t->to_record_list = btrace_record_list;

remote.c:
  current:
  remote_ops.to_enable_btrace = remote_enable_btrace;
  new one:
  t->to_record_list = btrace_record_list;

record-full.c:
  new one, to be implemented in the future:
  t->to_record_list = record_full_list;

[detail]
Initialization of the common fields like to_record_list could be moved to some
common initialization function like i386_use_watchpoints().


> > I would not be too strict with accessing the inferior memory.  I understand
> > that the memory content may be different when GDB is in the btrace history but
> > most of the memory including read/write variables a user may want to read will
> > be the same.
> >
> > It could rather just print a CLI (the default command-line interface) warning
> > if one accesses a read/write memory during command execution.
> > 
> > Forbidding any memory access may be correct but it may be pain for the users.
> 
> I would find this very confusing. Stack and heap might not be available any
> more - or might contain different objects.

The problem is if you are in the history and you want to print a variable -
should you "continue" back to the current state, read the variable, and move
back?


> I can do the CLI warning as first step. Will this warning also be available
> via MI?

It will be printed as a general GDB output record, but at least Eclipse CDT
users won't notice it.  It gets displayed only after selecting the Debug
window -> "gdb", and even then only in the "Console" window as a
"warning: ..." text.


> Do you know whether gdb is reading memory for some internal logic?

You can see it with "set debug target 1".  GDB still inserts/removes various
internal breakpoints ("maintenance info breakpoints") due to
"set breakpoint always-inserted", but that should be suppressed by the
record.c to_insert_breakpoint wrapper.


> That can easily become quite annoying if you have several threads. We could
> query the user if he wants to continue even though some threads are
> somewhere in the history.

Yes, that seems like the best choice.


> How would this work for MI?

Such questions get default-answered (nquery->n, yquery->y) when in MI mode.


> Is scheduler-locking implemented by the step commands or by the target's
> to_resume and to_wait functions?

The target's to_resume PTID parameter specifies whether all processes, all
threads of the specified process, or just the one specified thread get
resumed.


> I have only considered a single inferior, so far.

That should generally work even with multiple inferiors: each thread of each
inferior has a globally unique PTID, so as long as you track the threads
separately and use ptid_match() for the PTID comparisons, it works.


Thanks,
Jan

