This is the mail archive of the systemtap@sources.redhat.com mailing list for the systemtap project.



[RFC] Design + prototype: Multiple handler sets per probe address


Hi,

Here is a design to support "Multiple handler sets per address". I have also included an i386 implementation based on this design.

Some notes:

- The interfaces to register, unregister, define handlers all remain
the same.
- A kprobe and jprobe cannot co-exist at the same location. (Ideas are welcome on how to support this).


I have minimally tested the patch and it works(tm).

Please let me know your thoughts on the design. I'd also appreciate it if you could test the patch (diffed against 2.6.12-rc1-mm3) and provide feedback.


Thanks, Ananth

Draft of 1 April 2005.

1	Multiple handler sets per kprobe

1.1	Overview

One of the requirements of SystemTAP is the capability of defining 
multiple handler sets per probe address. The current design of kprobes 
does not allow the registration of more than one set of handlers 
(pre, post, fault, break) per address.

This is an attempt to come up with a design that will enable a user
to define multiple handler sets per probe address. 

1.2	Definitions

handler set: a "struct kprobe" that defines a set of pre, post, fault
and break handlers.

aggregate kprobe: The higher-level structure that will incorporate 
the common fields of the "current" struct kprobe.

1.3	Assumptions

Each handler set registered will be the result of an independent 
register_kprobe() call.


2	Design

2.1	The aggregate kprobe structure

This is a structure formed by abstracting out the fields of the 
current "struct kprobe" that remain common to all handler 
sets at a particular probe point.

The new aggregate kprobe structure will be:

struct aggr_kp {
	struct hlist_node hlist;
	int jprobe;
	kprobe_opcode_t *addr;
	kprobe_opcode_t opcode;
	struct list_head handlers;
	struct arch_specific_insn ainsn;
};

jprobe is used to determine whether the probe at the address is a
jprobe. This is required since a jprobe and a kprobe handler set
cannot co-exist at the same address.

2.2	The new struct kprobe

Modules will now just have to define:
	
struct kprobe {
	struct list_head list;
	kprobe_opcode_t *addr;
	kprobe_pre_handler_t pre_handler;
	kprobe_post_handler_t post_handler;
	kprobe_fault_handler_t fault_handler;
	kprobe_break_handler_t break_handler;
};

a. We have kprobe_opcode_t *addr in both structs. This is 
   deliberate, so that we do not break the existing interfaces.
b. The "list" element in struct kprobe will now be used to link 
   the handler set with the existing ones (handlers list in 
   struct aggr_kp).
c. Once the "handlers" list is empty, we free struct aggr_kp.

NOTE: It is always a good idea to explicitly set the handler fields 
not being used to NULL. The design determines whether a probe is a 
jprobe by virtue of the fact that only jprobes define break_handlers. 
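The NOTE above can be sketched as a small userspace model (the struct and function names here are hypothetical stand-ins, not the kernel interface): the jprobe test is simply the presence of a break_handler, so unused fields left uninitialized would misclassify the probe.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace model: a handler set is treated as a jprobe
 * registration iff break_handler is non-NULL, which is why unused
 * handler fields must be explicitly NULL. */
typedef int (*handler_t)(void);

struct handler_set {
	handler_t pre_handler;
	handler_t post_handler;
	handler_t fault_handler;
	handler_t break_handler;
};

static int is_jprobe_set(const struct handler_set *p)
{
	/* mirrors the design: only jprobes define break_handlers */
	return p->break_handler != NULL;
}

static int dummy_handler(void) { return 0; }
```

Designated initializers zero the remaining fields, which is the safe pattern the NOTE recommends.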

2.3	Base kprobes infrastructure

2.3.1	Registering a handler

1. A new handler set can be registered using the register_kprobe() 
   interface.
2. The kprobe infrastructure determines if a probe already exists at
   the requested location. 
   a. The get_kprobe() function will now return a struct aggr_kp 
      reference if a probe already exists at the given address.
3. If a probe exists:
   a. If the call is to register a jprobe, the new handler 
      registration fails since we can't have a jprobe and a kprobe 
      at the same address. Whether a registration is for a jprobe
      or not is determined by the presence of a break_handler.
   b. If not (the handler set is for a kprobe), the set is added 
      to the "handlers" list.
4. If the probe is new, the kernel allocates a new struct aggr_kp, 
   fills in the relevant fields and adds the handler to the 
   aggr_kp's "handlers" list. In addition, if the "break_handler" 
   of struct kprobe is set, then "jprobe" in struct aggr_kp is 
   set to 1 to indicate that this is a jprobe registration.
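The registration steps above can be modelled in a few lines of userspace C. All names here are illustrative stand-ins for the kernel structures, and a plain singly-linked chain stands in for the list_head machinery:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of section 2.3.1: a set with a break handler
 * stands in for a jprobe registration. */
struct set_node {
	int is_jprobe;            /* break_handler present, in the real struct */
	struct set_node *next;    /* stands in for the "handlers" list */
};

struct aggr_point {               /* stands in for struct aggr_kp */
	int jprobe;
	struct set_node *handlers;
};

/* Returns 0 on success, -1 (an EEXIST stand-in) on a jprobe/kprobe clash. */
static int model_register(struct aggr_point *ap, struct set_node *s)
{
	if (ap->handlers == NULL) {       /* step 4: new probe point */
		ap->jprobe = s->is_jprobe;
		s->next = NULL;
		ap->handlers = s;
		return 0;
	}
	/* step 3a: a jprobe cannot share an address with anything else */
	if (ap->jprobe || s->is_jprobe)
		return -1;
	s->next = ap->handlers;           /* step 3b: link into the list */
	ap->handlers = s;
	return 0;
}
```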
 
2.3.2	Unregistering a handler

A call to unregister_kprobe() with the address of the appropriate 
struct kprobe removes the handler set from the corresponding 
aggr_kp's "handlers" list.

If the "handlers" list is empty after deletion, struct aggr_kp is 
freed.

2.4	Arch specific implementations

Listed here are the required changes to each of the architecture 
implementations of kprobes. Some of the required changes, such as 
handler invocation, will be similar across all architectures.

2.4.1	Handler invocation

There are two possible methods of invoking handlers:

2.4.1.1	Serial invocation

All handlers are called in a sequence on every probe hit. It is up 
to the handler to decide what it has to do. 

Refer to section 2.4.2 for a more detailed description of how the 
various handlers behave.
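A minimal sketch of serial invocation, using an array in place of the aggr_kp "handlers" list (names are hypothetical): every handler runs on every hit, and return values are deliberately ignored.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of 2.4.1.1: call every handler in sequence. */
typedef int (*pre_handler_t)(int *ctx);

static void serial_invoke(pre_handler_t *handlers, int n, int *ctx)
{
	for (int i = 0; i < n; i++)
		if (handlers[i])
			handlers[i](ctx);   /* return value ignored for kprobes */
}

static int count_hit(int *ctx) { (*ctx)++; return 0; }
```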

2.4.1.2	Notifier based mechanism

As is done currently with the notifier mechanism for the die_chain, 
we can have a design wherein the handlers are called in sequence 
and each handler indicates whether or not it handled the probe. 
If a handler is done with all processing for the probe and 
does not want any other handler to run, it can just return 
NOTIFY_STOP and no subsequent handler will be called. If the handler
is done and does not have a problem with other handlers running, it 
can return NOTIFY_DONE and subsequent handlers will be called.

Refer to section 2.4.2 for a more detailed description of how the 
various handlers behave.
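The notifier-style variant can be sketched the same way, with local stand-in constants for the NOTIFY_* return values (the MODEL_ prefix marks them as hypothetical, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of 2.4.1.2: DONE lets the chain continue,
 * STOP ends it. */
#define MODEL_NOTIFY_DONE 0
#define MODEL_NOTIFY_STOP 1

typedef int (*notify_handler_t)(int *ctx);

static void notifier_invoke(notify_handler_t *handlers, int n, int *ctx)
{
	for (int i = 0; i < n; i++)
		if (handlers[i] && handlers[i](ctx) == MODEL_NOTIFY_STOP)
			break;          /* no subsequent handler is called */
}

static int done_handler(int *ctx) { (*ctx)++; return MODEL_NOTIFY_DONE; }
static int stop_handler(int *ctx) { (*ctx)++; return MODEL_NOTIFY_STOP; }
```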

At the time of writing, serial handler invocation is the preferred 
method.

2.4.2	More points on handler invocation

2.4.2.1	Pre handlers

We care about the return code of a pre_handler _only_ in the case 
of a jprobe. Return codes from all other (read: kprobe) handlers are 
ignored. However, it is advisable that all pre_handlers return 0 for
the sake of compatibility with future enhancements.

For a multiple handler set kprobe, all handlers will be called.

2.4.2.2	Post handlers

All post handlers will be called.

2.4.2.3 Fault handlers

Fault handlers are called in sequence _until_ one of them handles
the fault (returns non-zero). Once a fault is handled, no other
handlers are called.
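This short-circuit behaviour can be sketched as a hypothetical userspace model: dispatch stops at the first handler that claims the fault.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of 2.4.2.3: run fault handlers until one
 * returns non-zero, i.e. claims the fault. */
typedef int (*fault_handler_t)(int trapnr, int *calls);

static int fault_dispatch(fault_handler_t *handlers, int n,
			  int trapnr, int *calls)
{
	for (int i = 0; i < n; i++)
		if (handlers[i] && handlers[i](trapnr, calls))
			return 1;   /* fault handled; stop the chain */
	return 0;                   /* nobody handled it */
}

static int ignores_fault(int trapnr, int *calls)
{
	(void)trapnr;
	(*calls)++;
	return 0;
}

static int handles_fault(int trapnr, int *calls)
{
	(void)trapnr;
	(*calls)++;
	return 1;
}
```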


3	Caveats

- It is assumed that the handlers written behave well. :-)
- Multiple handlers cannot be registered at a location that already
  has a jprobe registered. In other words, to register a jprobe,
  no other handlers must be registered at that address and 
  vice-versa.
- The current design builds on the existing kprobe locking 
  infrastructure. It is hoped that this design (or a variant)
  can be the base for the scalability changes envisaged. 
  Suggestions are welcome :)
  
  (My earlier thought was to hit two birds with the same arrow,
  but later realized that one bird at a time would be a better
  bet to come up with a good design).


4	Issues


5	Questions

- We currently silently disarm the kprobe if we are recursing 
  and ignore it. Do we have to change that?
- Is the design implementable on all architectures on which kprobes
  is currently available?
  Gut feel: YES - the i386 prototype is ready!
diff -Naurp temp/linux-2.6.12-rc1/arch/i386/kernel/kprobes.c linux-2.6.12-rc1/arch/i386/kernel/kprobes.c
--- temp/linux-2.6.12-rc1/arch/i386/kernel/kprobes.c	2005-03-17 20:34:10.000000000 -0500
+++ linux-2.6.12-rc1/arch/i386/kernel/kprobes.c	2005-03-30 19:22:55.000000000 -0500
@@ -37,10 +37,11 @@
 #define KPROBE_HIT_ACTIVE	0x00000001
 #define KPROBE_HIT_SS		0x00000002
 
-static struct kprobe *current_kprobe;
+static struct aggr_kp *current_kprobe;
 static unsigned long kprobe_status, kprobe_old_eflags, kprobe_saved_eflags;
 static struct pt_regs jprobe_saved_regs;
 static long *jprobe_saved_esp;
+
 /* copy of the kernel stack at the probe fire time */
 static kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
 void jprobe_return_end(void);
@@ -65,26 +66,27 @@ int arch_prepare_kprobe(struct kprobe *p
 	return 0;
 }
 
-void arch_copy_kprobe(struct kprobe *p)
+void arch_copy_kprobe(struct aggr_kp *kp)
 {
-	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+	memcpy(kp->ainsn.insn, kp->addr, 
+			MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
 }
 
-void arch_remove_kprobe(struct kprobe *p)
+void arch_remove_kprobe(struct aggr_kp *kp)
 {
 }
 
-static inline void disarm_kprobe(struct kprobe *p, struct pt_regs *regs)
+static inline void disarm_kprobe(struct aggr_kp *kp, struct pt_regs *regs)
 {
-	*p->addr = p->opcode;
-	regs->eip = (unsigned long)p->addr;
+	*kp->addr = kp->opcode;
+	regs->eip = (unsigned long)kp->addr;
 }
 
-static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
+static inline void prepare_singlestep(struct aggr_kp *kp, struct pt_regs *regs)
 {
 	regs->eflags |= TF_MASK;
 	regs->eflags &= ~IF_MASK;
-	regs->eip = (unsigned long)&p->ainsn.insn;
+	regs->eip = (unsigned long)&kp->ainsn.insn;
 }
 
 /*
@@ -93,6 +95,7 @@ static inline void prepare_singlestep(st
  */
 static int kprobe_handler(struct pt_regs *regs)
 {
+	struct aggr_kp *kp;
 	struct kprobe *p;
 	int ret = 0;
 	kprobe_opcode_t *addr = NULL;
@@ -100,7 +103,9 @@ static int kprobe_handler(struct pt_regs
 
 	/* We're in an interrupt, but this is clear and BUG()-safe. */
 	preempt_disable();
-	/* Check if the application is using LDT entry for its code segment and
+
+	/* 
+	 * Check if the application is using LDT entry for its code segment and
 	 * calculate the address by reading the base address from the LDT entry.
 	 */
 	if ((regs->xcs & 4) && (current->mm)) {
@@ -108,21 +113,27 @@ static int kprobe_handler(struct pt_regs
 					+ (char *) current->mm->context.ldt);
 		addr = (kprobe_opcode_t *) (get_desc_base(lp) + regs->eip -
 						sizeof(kprobe_opcode_t));
-	} else {
+	} else
 		addr = (kprobe_opcode_t *)(regs->eip - sizeof(kprobe_opcode_t));
-	}
+
 	/* Check we're not actually recursing */
 	if (kprobe_running()) {
-		/* We *are* holding lock here, so this is safe.
-		   Disarm the probe we just hit, and ignore it. */
-		p = get_kprobe(addr);
-		if (p) {
-			disarm_kprobe(p, regs);
+		/* 
+		 * We *are* holding lock here, so this is safe.
+		 * Disarm the probe we just hit, and ignore it. 
+		 */
+		kp = get_kprobe(addr);
+		if (kp) {
+			disarm_kprobe(kp, regs);
 			ret = 1;
 		} else {
-			p = current_kprobe;
-			if (p->break_handler && p->break_handler(p, regs)) {
-				goto ss_probe;
+			kp = current_kprobe;
+			if (kp->jprobe) {
+				list_for_each_entry(p, &kp->handlers, list) {
+					if (p->break_handler && 
+						p->break_handler(p, regs))
+						goto ss_probe;
+				}
 			}
 		}
 		/* If it's not ours, can't be delete race, (we hold lock). */
@@ -130,8 +141,8 @@ static int kprobe_handler(struct pt_regs
 	}
 
 	lock_kprobes();
-	p = get_kprobe(addr);
-	if (!p) {
+	kp = get_kprobe(addr);
+	if (!kp) {
 		unlock_kprobes();
 		if (regs->eflags & VM_MASK) {
 			/* We are in virtual-8086 mode. Return 0 */
@@ -153,34 +164,46 @@ static int kprobe_handler(struct pt_regs
 	}
 
 	kprobe_status = KPROBE_HIT_ACTIVE;
-	current_kprobe = p;
+	current_kprobe = kp;
 	kprobe_saved_eflags = kprobe_old_eflags
 	    = (regs->eflags & (TF_MASK | IF_MASK));
-	if (is_IF_modifier(p->opcode))
+	if (is_IF_modifier(kp->opcode))
 		kprobe_saved_eflags &= ~IF_MASK;
 
-	if (p->pre_handler(p, regs)) {
-		/* handler has already set things up, so skip ss setup */
-		return 1;
+	/* 
+	 * call each handler one at a time
+	 * NOTE: if this is a jprobe, we have to skip the ss step
+	 */ 
+	if (kp->jprobe) {
+		/* jprobe has just _one_ struct kprobe in the handlers list */
+		list_for_each_entry(p, &kp->handlers, list)
+			if (p->pre_handler && p->pre_handler(p, regs))
+				return 1;
+	}
+
+	list_for_each_entry(p, &kp->handlers, list) {
+		/* we don't care about return values if this isn't a jprobe */
+		if (p->pre_handler) 
+			p->pre_handler(p, regs);
 	}
 
-      ss_probe:
-	prepare_singlestep(p, regs);
+ss_probe:
+	prepare_singlestep(kp, regs);
 	kprobe_status = KPROBE_HIT_SS;
 	return 1;
 
-      no_kprobe:
+no_kprobe:
 	preempt_enable_no_resched();
 	return ret;
 }
 
 /*
- * Called after single-stepping.  p->addr is the address of the
+ * Called after single-stepping.  kp->addr is the address of the
  * instruction whose first byte has been replaced by the "int 3"
  * instruction.  To avoid the SMP problems that can occur when we
  * temporarily put back the original opcode to single-step, we
  * single-stepped a copy of the instruction.  The address of this
- * copy is p->ainsn.insn.
+ * copy is kp->ainsn.insn.
  *
  * This function prepares to return from the post-single-step
  * interrupt.  We have to fix up the stack as follows:
@@ -196,14 +219,14 @@ static int kprobe_handler(struct pt_regs
  * that is atop the stack is the address following the copied instruction.
  * We need to make it the address following the original instruction.
  */
-static void resume_execution(struct kprobe *p, struct pt_regs *regs)
+static void resume_execution(struct aggr_kp *kp, struct pt_regs *regs)
 {
 	unsigned long *tos = (unsigned long *)&regs->esp;
 	unsigned long next_eip = 0;
-	unsigned long copy_eip = (unsigned long)&p->ainsn.insn;
-	unsigned long orig_eip = (unsigned long)p->addr;
+	unsigned long copy_eip = (unsigned long)&kp->ainsn.insn;
+	unsigned long orig_eip = (unsigned long)kp->addr;
 
-	switch (p->ainsn.insn[0]) {
+	switch (kp->ainsn.insn[0]) {
 	case 0x9c:		/* pushfl */
 		*tos &= ~(TF_MASK | IF_MASK);
 		*tos |= kprobe_old_eflags;
@@ -212,13 +235,13 @@ static void resume_execution(struct kpro
 		*tos = orig_eip + (*tos - copy_eip);
 		break;
 	case 0xff:
-		if ((p->ainsn.insn[1] & 0x30) == 0x10) {
+		if ((kp->ainsn.insn[1] & 0x30) == 0x10) {
 			/* call absolute, indirect */
 			/* Fix return addr; eip is correct. */
 			next_eip = regs->eip;
 			*tos = orig_eip + (*tos - copy_eip);
-		} else if (((p->ainsn.insn[1] & 0x31) == 0x20) ||	/* jmp near, absolute indirect */
-			   ((p->ainsn.insn[1] & 0x31) == 0x21)) {	/* jmp far, absolute indirect */
+		} else if (((kp->ainsn.insn[1] & 0x31) == 0x20) ||	/* jmp near, absolute indirect */
+			   ((kp->ainsn.insn[1] & 0x31) == 0x21)) {	/* jmp far, absolute indirect */
 			/* eip is correct. */
 			next_eip = regs->eip;
 		}
@@ -244,11 +267,15 @@ static void resume_execution(struct kpro
  */
 static inline int post_kprobe_handler(struct pt_regs *regs)
 {
+	struct kprobe *p;
+
 	if (!kprobe_running())
 		return 0;
 
-	if (current_kprobe->post_handler)
-		current_kprobe->post_handler(current_kprobe, regs, 0);
+	list_for_each_entry(p, &current_kprobe->handlers, list) {
+		if (p->post_handler)
+			p->post_handler(p, regs, 0);
+	}
 
 	resume_execution(current_kprobe, regs);
 	regs->eflags |= kprobe_saved_eflags;
@@ -270,9 +297,17 @@ static inline int post_kprobe_handler(st
 /* Interrupts disabled, kprobe_lock held. */
 static inline int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 {
-	if (current_kprobe->fault_handler
-	    && current_kprobe->fault_handler(current_kprobe, regs, trapnr))
-		return 1;
+	struct kprobe *p;
+
+	/* 
+	 * Just return if any of the handlers returned 1 
+	 * 'cos they'd have handled the fault
+	 */
+	list_for_each_entry(p, &current_kprobe->handlers, list) {
+		if (p->fault_handler && 
+			p->fault_handler(p, regs, trapnr))
+			return 1;
+	}
 
 	if (kprobe_status & KPROBE_HIT_SS) {
 		resume_execution(current_kprobe, regs);
@@ -355,7 +390,8 @@ int longjmp_break_handler(struct kprobe 
 	unsigned long stack_addr = (unsigned long)jprobe_saved_esp;
 	struct jprobe *jp = container_of(p, struct jprobe, kp);
 
-	if ((addr > (u8 *) jprobe_return) && (addr < (u8 *) jprobe_return_end)) {
+	if ((addr > (u8 *) jprobe_return) && 
+			(addr < (u8 *) jprobe_return_end)) {
 		if (&regs->esp != jprobe_saved_esp) {
 			struct pt_regs *saved_regs =
 			    container_of(jprobe_saved_esp, struct pt_regs, esp);
@@ -368,8 +404,8 @@ int longjmp_break_handler(struct kprobe 
 			BUG();
 		}
 		*regs = jprobe_saved_regs;
-		memcpy((kprobe_opcode_t *) stack_addr, jprobes_stack,
-		       MIN_STACK_SIZE(stack_addr));
+		memcpy((kprobe_opcode_t *) stack_addr, 
+				jprobes_stack, MIN_STACK_SIZE(stack_addr));
 		return 1;
 	}
 	return 0;
diff -Naurp temp/linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c
--- temp/linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c	2005-03-30 20:22:04.000000000 -0500
+++ linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c	2005-03-30 17:31:51.000000000 -0500
@@ -128,10 +128,9 @@ static inline int kprobe_handler(struct 
 	kprobe_status = KPROBE_HIT_ACTIVE;
 	current_kprobe = p;
 	kprobe_saved_msr = regs->msr;
-	if (p->pre_handler(p, regs)) {
+	if (p->pre_handler && p->pre_handler(p, regs))
 		/* handler has already set things up, so skip ss setup */
 		return 1;
-	}
 
 ss_probe:
 	prepare_singlestep(p, regs);
diff -Naurp temp/linux-2.6.12-rc1/arch/sparc64/kernel/kprobes.c linux-2.6.12-rc1/arch/sparc64/kernel/kprobes.c
--- temp/linux-2.6.12-rc1/arch/sparc64/kernel/kprobes.c	2005-03-17 20:34:33.000000000 -0500
+++ linux-2.6.12-rc1/arch/sparc64/kernel/kprobes.c	2005-03-29 12:00:29.000000000 -0500
@@ -128,7 +128,7 @@ static int kprobe_handler(struct pt_regs
 
 	kprobe_status = KPROBE_HIT_ACTIVE;
 	current_kprobe = p;
-	if (p->pre_handler(p, regs))
+	if (p->pre_handler && p->pre_handler(p, regs))
 		return 1;
 
 ss_probe:
diff -Naurp temp/linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c
--- temp/linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c	2005-03-30 20:22:04.000000000 -0500
+++ linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c	2005-03-29 12:00:29.000000000 -0500
@@ -293,17 +293,16 @@ int kprobe_handler(struct pt_regs *regs)
 	if (is_IF_modifier(p->ainsn.insn))
 		kprobe_saved_rflags &= ~IF_MASK;
 
-	if (p->pre_handler(p, regs)) {
+	if (p->pre_handler && p->pre_handler(p, regs))
 		/* handler has already set things up, so skip ss setup */
 		return 1;
-	}
 
-      ss_probe:
+ss_probe:
 	prepare_singlestep(p, regs);
 	kprobe_status = KPROBE_HIT_SS;
 	return 1;
 
-      no_kprobe:
+no_kprobe:
 	preempt_enable_no_resched();
 	return ret;
 }
diff -Naurp temp/linux-2.6.12-rc1/include/linux/kprobes.h linux-2.6.12-rc1/include/linux/kprobes.h
--- temp/linux-2.6.12-rc1/include/linux/kprobes.h	2005-03-17 20:34:07.000000000 -0500
+++ linux-2.6.12-rc1/include/linux/kprobes.h	2005-03-30 19:23:11.000000000 -0500
@@ -41,7 +41,7 @@ typedef void (*kprobe_post_handler_t) (s
 typedef int (*kprobe_fault_handler_t) (struct kprobe *, struct pt_regs *,
 				       int trapnr);
 struct kprobe {
-	struct hlist_node hlist;
+	struct list_head list;
 
 	/* location of the probe point */
 	kprobe_opcode_t *addr;
@@ -59,11 +59,18 @@ struct kprobe {
 	/* ... called if breakpoint trap occurs in probe handler.
 	 * Return 1 if it handled break, otherwise kernel will see it. */
 	kprobe_break_handler_t break_handler;
+};
 
-	/* Saved opcode (which has been replaced with breakpoint) */
+/**
+ * poc for multiple handlers per probe
+ * define a struct aggr_kp abstracting fields from the original kprobe struct
+ */
+struct aggr_kp {
+	struct hlist_node hlist;
+	int jprobe;
+	kprobe_opcode_t *addr;
 	kprobe_opcode_t opcode;
-
-	/* copy of the original instruction */
+	struct list_head handlers;	/* list of struct kprobe handlers */
 	struct arch_specific_insn ainsn;
 };
 
@@ -95,12 +102,12 @@ static inline int kprobe_running(void)
 }
 
 extern int arch_prepare_kprobe(struct kprobe *p);
-extern void arch_copy_kprobe(struct kprobe *p);
-extern void arch_remove_kprobe(struct kprobe *p);
+extern void arch_copy_kprobe(struct aggr_kp *kp);
+extern void arch_remove_kprobe(struct aggr_kp *kp);
 extern void show_registers(struct pt_regs *regs);
 
 /* Get the kprobe at this addr (if any).  Must have called lock_kprobes */
-struct kprobe *get_kprobe(void *addr);
+struct aggr_kp *get_kprobe(void *addr);
 
 int register_kprobe(struct kprobe *p);
 void unregister_kprobe(struct kprobe *p);
diff -Naurp temp/linux-2.6.12-rc1/kernel/kprobes.c linux-2.6.12-rc1/kernel/kprobes.c
--- temp/linux-2.6.12-rc1/kernel/kprobes.c	2005-03-30 20:22:26.000000000 -0500
+++ linux-2.6.12-rc1/kernel/kprobes.c	2005-03-30 19:56:48.000000000 -0500
@@ -59,16 +59,16 @@ void unlock_kprobes(void)
 }
 
 /* You have to be holding the kprobe_lock */
-struct kprobe *get_kprobe(void *addr)
+struct aggr_kp *get_kprobe(void *addr)
 {
 	struct hlist_head *head;
 	struct hlist_node *node;
 
 	head = &kprobe_table[hash_ptr(addr, KPROBE_HASH_BITS)];
 	hlist_for_each(node, head) {
-		struct kprobe *p = hlist_entry(node, struct kprobe, hlist);
-		if (p->addr == addr)
-			return p;
+		struct aggr_kp *kp = hlist_entry(node, struct aggr_kp, hlist);
+		if (kp->addr == addr)
+			return kp;
 	}
 	return NULL;
 }
@@ -77,42 +77,82 @@ int register_kprobe(struct kprobe *p)
 {
 	int ret = 0;
 	unsigned long flags = 0;
+	struct aggr_kp *kp;
 
 	if ((ret = arch_prepare_kprobe(p)) != 0) {
-		goto rm_kprobe;
+		ret = -EINVAL;
+		goto out;
 	}
+	
 	spin_lock_irqsave(&kprobe_lock, flags);
-	INIT_HLIST_NODE(&p->hlist);
-	if (get_kprobe(p->addr)) {
-		ret = -EEXIST;
-		goto out;
+
+	kp = get_kprobe(p->addr);
+	if (kp) {
+		if (kp->jprobe) {
+			/* jprobe already exists at the address */ 
+			ret = -EEXIST;
+			goto free_lock;
+		} else if (p->break_handler) {
+			/* kprobe already exists at the address */
+			ret = -EEXIST;
+			goto free_lock;
+		}
+		/* just add handler to list */
+		list_add(&p->list, &kp->handlers);
+	} else {
+		/* no probe here yet - allocate one and fill up details */
+		kp = kcalloc(1, sizeof(struct aggr_kp), GFP_KERNEL);
+		if (!kp) {
+			ret = -ENOMEM;
+			goto free_lock;
+		}
+
+		INIT_HLIST_NODE(&kp->hlist);
+
+		if (p->break_handler)
+			kp->jprobe = 1;
+
+		kp->addr = p->addr;
+		arch_copy_kprobe(kp);
+		kp->opcode = *kp->addr;
+		*kp->addr = BREAKPOINT_INSTRUCTION;
+
+		INIT_LIST_HEAD(&kp->handlers);
+		list_add(&p->list, &kp->handlers);
+
+		hlist_add_head(&kp->hlist,
+		       &kprobe_table[hash_ptr(kp->addr, KPROBE_HASH_BITS)]);
+		flush_icache_range((unsigned long) kp->addr,
+			   (unsigned long) kp->addr + sizeof(kprobe_opcode_t));
 	}
-	arch_copy_kprobe(p);
 
-	hlist_add_head(&p->hlist,
-		       &kprobe_table[hash_ptr(p->addr, KPROBE_HASH_BITS)]);
+free_lock:
+	spin_unlock_irqrestore(&kprobe_lock, flags);
 
-	p->opcode = *p->addr;
-	*p->addr = BREAKPOINT_INSTRUCTION;
-	flush_icache_range((unsigned long) p->addr,
-			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
 out:
-	spin_unlock_irqrestore(&kprobe_lock, flags);
-rm_kprobe:
-	if (ret == -EEXIST)
-		arch_remove_kprobe(p);
 	return ret;
 }
 
 void unregister_kprobe(struct kprobe *p)
 {
 	unsigned long flags;
-	arch_remove_kprobe(p);
+	struct aggr_kp *kp;
+
+	kp = get_kprobe(p->addr);
+	if (!kp)
+		return;
+
 	spin_lock_irqsave(&kprobe_lock, flags);
-	*p->addr = p->opcode;
-	hlist_del(&p->hlist);
-	flush_icache_range((unsigned long) p->addr,
-			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
+	list_del(&p->list);
+	if (list_empty(&kp->handlers)) {
+		/* all handlers unregistered - free aggr_kp */
+		arch_remove_kprobe(kp);
+		*kp->addr = kp->opcode;
+		hlist_del(&kp->hlist);
+		flush_icache_range((unsigned long) kp->addr,
+			   (unsigned long) kp->addr + sizeof(kprobe_opcode_t));
+		kfree(kp);
+	}
 	spin_unlock_irqrestore(&kprobe_lock, flags);
 }
 
