


Re: [RFC -mm][PATCH 6/6] kprobes code for x86 unification


On Mon, Dec 10, 2007 at 05:53:27PM -0500, Masami Hiramatsu wrote:
> This patch unifies kprobes code.
> 
> - Unify kprobes_*.h to kprobes.h
> - Unify kprobes_*.c to kprobes.c
>   (Differences are separated by ifdefs)
>  - Most differences are related to the REX prefix and RIP-relative addressing.
>  - Two inline assembly sequences are different.
>  - One difference (a suspicious bug?) in kprobe_handler()
>  - One exception fixup code path is different, but I think it can be unified
>    once mm/extable_*.c are unified.
> 
> Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
> ---
>  arch/x86/kernel/Makefile_32  |    2
>  arch/x86/kernel/Makefile_64  |    2
>  arch/x86/kernel/kprobes.c    | 1010 +++++++++++++++++++++++++++++++++++++++++++
>  arch/x86/kernel/kprobes_32.c |  830 -----------------------------------
>  arch/x86/kernel/kprobes_64.c |  941 ----------------------------------------
>  include/asm-x86/kprobes.h    |  101 ++++
>  include/asm-x86/kprobes_32.h |   96 ----
>  include/asm-x86/kprobes_64.h |   96 ----
>  8 files changed, 1108 insertions(+), 1970 deletions(-)

...

While we are at it, can you please run the patch through checkpatch?
It'll probably spew out lots of errors for the opcode tables, but we can
ignore that.
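Something like this, run from the top of the tree, should do (the patch file
name below is just a placeholder):

	$ scripts/checkpatch.pl path/to/kprobes-x86-unification.patch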

> Index: 2.6.24-rc4-mm1/arch/x86/kernel/kprobes.c
> ===================================================================
> --- /dev/null
> +++ 2.6.24-rc4-mm1/arch/x86/kernel/kprobes.c
> @@ -0,0 +1,1010 @@

...

> +/*
> + * Adjust the displacement if the instruction uses the %rip-relative
> + * addressing mode.
> + * If it does, return the address of the 32-bit displacement word.
> + * If not, return null.
> + */
> +static void __kprobes fix_riprel(struct kprobe *p)
> +{
> +#ifdef CONFIG_X86_64

It'd be good to move this ifdef to the callsite instead; a full function
ifdef'd out for X86_32 just looks a bit odd.
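Roughly like this at the call site, I mean -- an untested sketch, assuming
arch_copy_kprobe() stays the only caller and fix_riprel() itself moves under
the same #ifdef:

static void __kprobes arch_copy_kprobe(struct kprobe *p)
{
	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
#ifdef CONFIG_X86_64
	/* Only 64-bit has %rip-relative displacements to adjust. */
	fix_riprel(p);
#endif
	...
}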

> +	u8 *insn = p->ainsn.insn;
> +	s64 disp;
> +	int need_modrm;
> +
> +	/* Skip legacy instruction prefixes.  */
> +	while (1) {
> +		switch (*insn) {
> +		case 0x66:
> +		case 0x67:
> +		case 0x2e:
> +		case 0x3e:
> +		case 0x26:
> +		case 0x64:
> +		case 0x65:
> +		case 0x36:
> +		case 0xf0:
> +		case 0xf3:
> +		case 0xf2:
> +			++insn;
> +			continue;
> +		}
> +		break;
> +	}
> +
> +	/* Skip REX instruction prefix.  */
> +	if ((*insn & 0xf0) == 0x40)
> +		++insn;
> +
> +	if (*insn == 0x0f) {	/* Two-byte opcode.  */
> +		++insn;
> +		need_modrm = test_bit(*insn, twobyte_has_modrm);
> +	} else {		/* One-byte opcode.  */
> +		need_modrm = test_bit(*insn, onebyte_has_modrm);
> +	}

Braces not needed?
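Folding the increment into the test_bit() call would let both branches drop
the braces, e.g. (just a thought, should be equivalent):

	if (*insn == 0x0f)	/* Two-byte opcode.  */
		need_modrm = test_bit(*++insn, twobyte_has_modrm);
	else			/* One-byte opcode.  */
		need_modrm = test_bit(*insn, onebyte_has_modrm);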

> +
> +	if (need_modrm) {
> +		u8 modrm = *++insn;
> +		if ((modrm & 0xc7) == 0x05) { /* %rip+disp32 addressing mode */
> +			/* Displacement follows ModRM byte.  */
> +			++insn;
> +			/*
> +			 * The copied instruction uses the %rip-relative
> +			 * addressing mode.  Adjust the displacement for the
> +			 * difference between the original location of this
> +			 * instruction and the location of the copy that will
> +			 * actually be run.  The tricky bit here is making sure
> +			 * that the sign extension happens correctly in this
> +			 * calculation, since we need a signed 32-bit result to
> +			 * be sign-extended to 64 bits when it's added to the
> +			 * %rip value and yield the same 64-bit result that the
> +			 * sign-extension of the original signed 32-bit
> +			 * displacement would have given.
> +			 */
> +			disp = (u8 *) p->addr + *((s32 *) insn) - (u8 *) p->ainsn.insn;
> +			BUG_ON((s64) (s32) disp != disp); /* Sanity check.  */
> +			*(s32 *)insn = (s32) disp;
> +		}
> +	}
> +#endif
> +}
> +
> +static void __kprobes arch_copy_kprobe(struct kprobe *p)
> +{
> +	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
> +	fix_riprel(p);
> +	if (can_boost(p->addr)) {
> +		p->ainsn.boostable = 0;
> +	} else {
> +		p->ainsn.boostable = -1;
> +	}

Here too ^^
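i.e. just:

	if (can_boost(p->addr))
		p->ainsn.boostable = 0;
	else
		p->ainsn.boostable = -1;

(or even "p->ainsn.boostable = can_boost(p->addr) ? 0 : -1;", whichever you
prefer).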

> +	p->opcode = *p->addr;
> +}

...

> +		/*
> +		 * In case the user-specified fault handler returned
> +		 * zero, try to fix up.
> +		 */
> +#ifdef CONFIG_X86_64
> +		{
> +			const struct exception_table_entry *fixup;
> +			fixup = search_exception_tables(regs->ip);
> +			if (fixup) {
> +				regs->ip = fixup->fixup;
> +				return 1;
> +			}
> +
> +		}

We don't need the extra level of indentation above, do we?
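The inner braces seem to be there only so that 'fixup' can be declared on the
spot. Hoisting the declaration to the top of the function (under the same
#ifdef, so 32-bit builds stay quiet) would let the extra block go away --
untested sketch, assuming this hunk lives in kprobe_fault_handler():

	/* near the top of kprobe_fault_handler() */
#ifdef CONFIG_X86_64
	const struct exception_table_entry *fixup;
#endif
	...
		/*
		 * In case the user-specified fault handler returned
		 * zero, try to fix up.
		 */
#ifdef CONFIG_X86_64
		fixup = search_exception_tables(regs->ip);
		if (fixup) {
			regs->ip = fixup->fixup;
			return 1;
		}
#endif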

Ananth

