This is the mail archive of the systemtap@sources.redhat.com mailing list for the systemtap project.



Re: [KPROBE][RFC] Tweak to the function return probe design


Rusty Lynch wrote:
From my experiences with adding return probes to x86_64 and ia64, and the
feedback on LKML to those patches, I think we can simplify the design
for return probes.

The following patch tweaks the original design such that:

* Instead of storing the stack address in the return probe instance, the
  task pointer is stored.  This gives us all we need in order to:
    - find the correct return probe instance when we enter the trampoline
      (even if we are recursing)
    - find all left-over return probe instances when the task is going away

  This has the side effect of simplifying the implementation, since more
  work can be done in kernel/kprobes.c now that architecture specific
  knowledge of the stack layout is no longer required.  Specifically, we
  no longer have:
	- arch_get_kprobe_task()
	- arch_kprobe_flush_task()
	- get_rp_inst_tsk()
	- get_rp_inst()
	- trampoline_post_handler() <see next bullet>

* Instead of splitting the return probe handling and cleanup logic across
  the pre and post trampoline handlers, all the work is pushed into the
  pre handler (trampoline_probe_handler), and then we skip single stepping
  the original instruction.  Since the instruction under the trampoline
  probe is just a NOP, we can do without the extra interruption.


The new flow of events to having a return probe handler execute when a target
function exits is:

* At system initialization time, a kprobe is inserted at the beginning of
  kretprobe_trampoline (this has not changed)

* register_kretprobe() will insert a kprobe at the beginning of the targeted
  function with the kprobe pre_handler set to arch_prepare_kretprobe
  (still no change)

* When the target function is entered, the kprobe is fired, calling
  arch_prepare_kretprobe (still no change)

* In arch_prepare_kretprobe() we try to get a free instance and if one is
  available then we fill out the instance with a pointer to the return probe,
  the original return address, and a pointer to the task structure (instead
  of the stack address.)  Just like before we change the return address
  to the trampoline function and mark the instance as used.

If multiple return probes are registered for a given target function,
then arch_prepare_kretprobe() will get called multiple times for the
same task (since our kprobe implementation is able to handle multiple
kprobes at the same address.)  Past the first call to
arch_prepare_kretprobe, we end up with the original address stored in
the return probe instance pointing to our trampoline function.  (This
is a significant difference from the original arch_prepare_kretprobe
design.)


* Target function executes like normal and then returns to kretprobe_trampoline.

* kprobe inserted on the first instruction of kretprobe_trampoline is fired
  and calls trampoline_probe_handler() (no change here)

* trampoline_probe_handler() looks up the newest return probe instance
  associated with the current task and then:
    - calls the registered handler function
    - sets the pt_regs instruction pointer back to the original return
      address
    - marks the instance as free
    - returns in a way that the single stepping of the original
      instruction is skipped


If there were multiple kretprobes registered for the target function,
then the original return address stored in the instance would be
kretprobe_trampoline itself, causing the next instance to be handled
in turn until finally the real return address is restored.


(Huge change)
* If the task is killed with some left-over return probe instances
  (meaning that a target function was entered but never returned), then
  we just free any instances associated with the task.  (Not much
  different, other than that we can now handle this without calling
  architecture specific functions.)


BUT... I just mark each of the instances as unused without bothering to
restore their original return address.  The original implementation
would restore the return address for each instance (I assume because it
was thought that the target function could still be running and could
still return.)

On i386 we do this cleanup from process.c:exit_thread() and
process.c:flush_thread().  Let me know if I am wrong, but I do not
think it is possible at this point in the task lifecycle for the
target functions to continue and get into trouble because of a wrong
return address.


(Significant change)

This patch applies to the 2.6.12-rc6-mm1 kernel, but only for the i386
architecture.  I know this approach will work on x86_64 and ia64, and I
haven't the slightest clue if this is ok for ppc64.

Great! The code is much leaner and meaner. And, here is the ppc64 implementation (yes, it works!). Patch applies with a minor fuzz on 2.6.12-rc6-mm1 'cos some other ppc64 cleanups will show up only in -mm2.

Ananth


PPC64 version of Rusty's modified retprobe patch

 arch/ppc64/kernel/kprobes.c |   65 ++++++++++++++++++++++++++++++++++++++++++++
 arch/ppc64/kernel/process.c |    4 ++
 include/asm-ppc64/kprobes.h |    3 ++
 3 files changed, 72 insertions(+)

Index: linux-2.6.12-rc6/arch/ppc64/kernel/kprobes.c
===================================================================
--- linux-2.6.12-rc6.orig/arch/ppc64/kernel/kprobes.c	2005-06-08 17:21:05.000000000 -0400
+++ linux-2.6.12-rc6/arch/ppc64/kernel/kprobes.c	2005-06-08 17:49:55.000000000 -0400
@@ -109,6 +109,23 @@ static inline void restore_previous_kpro
 	kprobe_saved_msr = kprobe_saved_msr_prev;
 }
 
+void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
+{
+	struct kretprobe_instance *ri;
+
+	if ((ri = get_free_rp_inst(rp)) != NULL) {
+		ri->rp = rp;
+		ri->task = current;
+		ri->ret_addr = (void *)regs->link;
+
+		/* Replace the return addr with trampoline addr */
+		regs->link = (unsigned long)kretprobe_trampoline;
+		add_rp_inst(ri);
+	} else {
+		rp->nmissed++;
+	}
+}
+
 static inline int kprobe_handler(struct pt_regs *regs)
 {
 	struct kprobe *p;
@@ -199,6 +216,54 @@ no_kprobe:
 }
 
 /*
+ * Function return probe trampoline:
+ * 	- init_kprobes() establishes a probepoint here
+ * 	- When the probed function returns, this probe
+ * 		causes the handlers to fire
+ */
+void kretprobe_trampoline_holder(void)
+{
+	asm volatile(".global kretprobe_trampoline\n"
+			"kretprobe_trampoline:\n"
+			"nop\n");
+}
+
+/*
+ * Called when the probe at kretprobe trampoline is hit
+ */
+int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+{
+	struct kretprobe_instance *ri = NULL;
+	struct hlist_head *head;
+	struct hlist_node *node;
+
+	head = kretprobe_inst_table_head(current);
+
+	/*
+	 * The first instance associated with the task is the
+	 * instance we need for this function return
+	 */
+	hlist_for_each_entry(ri, node, head, hlist)
+		if (ri->task == current)
+			break;
+
+	BUG_ON(!ri);
+
+	if (ri->rp && ri->rp->handler)
+		ri->rp->handler(ri, regs);
+
+	regs->nip = (unsigned long)ri->ret_addr;
+	recycle_rp_inst(ri);
+
+	unlock_kprobes();
+	/*
+	 * By returning a non-zero value, we are telling
+	 * kprobe_handler() that we have unlocked kprobes
+	 */
+	return 1;
+}
+
+/*
  * Called after single-stepping.  p->addr is the address of the
  * instruction whose first byte has been replaced by the "breakpoint"
  * instruction.  To avoid the SMP problems that can occur when we
Index: linux-2.6.12-rc6/arch/ppc64/kernel/process.c
===================================================================
--- linux-2.6.12-rc6.orig/arch/ppc64/kernel/process.c	2005-06-08 17:17:01.000000000 -0400
+++ linux-2.6.12-rc6/arch/ppc64/kernel/process.c	2005-06-08 17:24:24.000000000 -0400
@@ -37,6 +37,7 @@
 #include <linux/interrupt.h>
 #include <linux/utsname.h>
 #include <linux/perfctr.h>
+#include <linux/kprobes.h>
 
 #include <asm/pgtable.h>
 #include <asm/uaccess.h>
@@ -310,6 +311,8 @@ void show_regs(struct pt_regs * regs)
 
 void exit_thread(void)
 {
+	kprobe_flush_task(current);
+
 #ifndef CONFIG_SMP
 	if (last_task_used_math == current)
 		last_task_used_math = NULL;
@@ -325,6 +328,7 @@ void flush_thread(void)
 {
 	struct thread_info *t = current_thread_info();
 
+	kprobe_flush_task(current);
 	if (t->flags & _TIF_ABI_PENDING)
 		t->flags ^= (_TIF_ABI_PENDING | _TIF_32BIT);
 
Index: linux-2.6.12-rc6/include/asm-ppc64/kprobes.h
===================================================================
--- linux-2.6.12-rc6.orig/include/asm-ppc64/kprobes.h	2005-06-06 11:22:29.000000000 -0400
+++ linux-2.6.12-rc6/include/asm-ppc64/kprobes.h	2005-06-08 17:22:51.000000000 -0400
@@ -42,6 +42,9 @@ typedef unsigned int kprobe_opcode_t;
 
 #define JPROBE_ENTRY(pentry)	(kprobe_opcode_t *)((func_descr_t *)pentry)
 
+#define ARCH_SUPPORTS_KRETPROBES
+void kretprobe_trampoline(void);
+
 /* Architecture specific copy of original instruction */
 struct arch_specific_insn {
 	/* copy of original instruction */
