Merge tag 'ftrace-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull ftrace updates from Steven Rostedt:

 - Have fprobes built on top of function graph infrastructure

   The fprobe logic is an optimized kprobe that uses ftrace to attach to
   functions when a probe is needed at the start or end of the function.
   The fprobe and kretprobe logic implements a similar method as the
   function graph tracer to trace the end of the function. That is to
   hijack the return address and jump to a trampoline to do the trace
   when the function exits. To do this, a shadow stack needs to be
   created to store the original return address. Fprobes and function
   graph do this slightly differently. Fprobes (and kretprobes) have
   slots per callsite that are reserved to save the return address. This
   is fine when just a few points are traced. But users of fprobes, such
   as BPF programs, are starting to add many more locations, and this
   method does not scale.

   The function graph tracer was created to trace all functions in the
   kernel. In order to do this, when function graph tracing is started,
   every task gets its own shadow stack to hold the return address that
   is going to be traced. The function graph tracer has been updated to
   allow multiple users to use its infrastructure, and fprobes are now
   one of those users. This also allows the fprobe and kretprobe methods
   of tracing the return address to become obsolete. With new
   technologies like CFI that need to know about these methods of
   hijacking the return address, going toward a solution that has only
   one method of doing this will make the kernel less complex.
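
   As a concrete illustration, here is a minimal sketch of an fprobe
   using the ftrace_regs-based callbacks this series introduces (the
   callback signatures match the updated Documentation/trace/fprobe.rst
   below; the probed symbol and the handler names are made up for the
   example):

	#include <linux/fprobe.h>
	#include <linux/ftrace.h>

	static int my_entry(struct fprobe *fp, unsigned long entry_ip,
			    unsigned long ret_ip, struct ftrace_regs *fregs,
			    void *entry_data)
	{
		pr_info("entered %pS\n", (void *)entry_ip);
		return 0;	/* non-zero cancels the exit callback */
	}

	static void my_exit(struct fprobe *fp, unsigned long entry_ip,
			    unsigned long ret_ip, struct ftrace_regs *fregs,
			    void *entry_data)
	{
		/* the return value is read via the ftrace_regs_*() APIs */
		pr_info("%pS returned %lx\n", (void *)entry_ip,
			ftrace_regs_get_return_value(fregs));
	}

	static struct fprobe fp = {
		.entry_handler = my_entry,
		.exit_handler  = my_exit,
	};

	/* register_fprobe(&fp, "kernel_clone", NULL) attaches the probe;
	 * the exit hook now rides on the function-graph shadow stack
	 * instead of per-callsite rethook slots.
	 */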

 - Cleanup with guard() and __free() helpers

   There were several places in the code that had a lot of "goto out" in
   the error paths to either unlock a lock or free some memory that was
   allocated, which is error prone. Convert the code over to use the
   guard() and __free() helpers, which let the compiler unlock the lock
   or free the memory automatically when the function exits.
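
   For reference, a sketch of the <linux/cleanup.h> pattern the
   converted code uses (my_mutex and do_work() are placeholder names,
   not taken from this series):

	#include <linux/cleanup.h>
	#include <linux/mutex.h>
	#include <linux/slab.h>

	static DEFINE_MUTEX(my_mutex);

	static int do_work(void)
	{
		/* unlocked automatically on every return path */
		guard(mutex)(&my_mutex);

		/* freed automatically when it goes out of scope */
		void *buf __free(kfree) = kzalloc(64, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* ... no "goto out" labels needed ... */
		return 0;
	}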

 - Remove disabling of interrupts in the function graph tracer

   When the function graph tracer was first introduced, it could race
   with interrupts and NMIs. To prevent that race, it would disable
   interrupts and not trace NMIs. But the code has since been changed to
   handle both NMIs and interrupts. That change was done a long time
   ago, yet the disabling of interrupts was never removed. Remove the
   disabling of interrupts in the function graph tracer as it is no
   longer needed. This greatly improves its performance.
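
   Conceptually (a sketch of the idea, not the exact diff), the
   entry/return handlers went from bracketing their work with interrupt
   disabling to the much cheaper notrace preemption guards:

	#include <linux/preempt.h>

	static void record_entry_sketch(void)
	{
		/* was: local_irq_save(flags); ... local_irq_restore(flags); */
		preempt_disable_notrace();
		/* ... record the function entry into the ring buffer ... */
		preempt_enable_notrace();
	}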

 - Allow the :mod: command to enable tracing module functions on the
   kernel command line.

   The function tracer already has a way to enable functions to be
   traced in modules by writing ":mod:<module>" into set_ftrace_filter.
   If the module is already loaded, that will enable all of the module's
   functions; if it is not, the command is cached, and when a module
   matching <module> is loaded, its functions will be enabled. This also
   allows init functions to be traced. But the kernel command line
   filtering currently lacks that feature.

   Because enabling function tracing can be done very early at boot up
   (before scheduling is enabled), the commands that can be run when
   function tracing is started are limited. Having the ":mod:" command to
   trace module functions as they are loaded is very useful. Update the
   kernel command line function filtering to allow it.
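
   For example (a hypothetical boot line, with "i915" standing in for
   any module), booting with:

	ftrace=function ftrace_filter=:mod:i915

   would start the function tracer at boot and cache the filter, so the
   module's functions are traced as soon as i915 is loaded.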

* tag 'ftrace-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (26 commits)
  ftrace: Implement :mod: cache filtering on kernel command line
  tracing: Adopt __free() and guard() for trace_fprobe.c
  bpf: Use ftrace_get_symaddr() for kprobe_multi probes
  ftrace: Add ftrace_get_symaddr to convert fentry_ip to symaddr
  Documentation: probes: Update fprobe on function-graph tracer
  selftests/ftrace: Add a test case for repeating register/unregister fprobe
  selftests: ftrace: Remove obsolate maxactive syntax check
  tracing/fprobe: Remove nr_maxactive from fprobe
  fprobe: Add fprobe_header encoding feature
  fprobe: Rewrite fprobe on function-graph tracer
  s390/tracing: Enable HAVE_FTRACE_GRAPH_FUNC
  ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
  bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
  tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
  tracing: Add ftrace_fill_perf_regs() for perf event
  tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
  fprobe: Use ftrace_regs in fprobe exit handler
  fprobe: Use ftrace_regs in fprobe entry handler
  fgraph: Pass ftrace_regs to retfunc
  fgraph: Replace fgraph_ret_regs with ftrace_regs
  ...
Linus Torvalds 2025-01-21 15:15:28 -08:00
commit 2e04247f7c
57 changed files with 1515 additions and 863 deletions


@ -9,9 +9,10 @@ Fprobe - Function entry/exit probe
Introduction
============
Fprobe is a function entry/exit probe mechanism based on ftrace.
Instead of using ftrace full feature, if you only want to attach callbacks
on function entry and exit, similar to the kprobes and kretprobes, you can
Fprobe is a function entry/exit probe based on the function-graph tracing
feature in ftrace.
Instead of tracing all functions, if you want to attach callbacks on specific
function entry and exit, similar to the kprobes and kretprobes, you can
use fprobe. Compared with kprobes and kretprobes, fprobe gives faster
instrumentation for multiple functions with single handler. This document
describes how to use fprobe.
@ -91,12 +92,14 @@ The prototype of the entry/exit callback function are as follows:
.. code-block:: c
int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data);
int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data);
void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data);
void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data);
Note that the @entry_ip is saved at function entry and passed to exit handler.
If the entry callback function returns !0, the corresponding exit callback will be cancelled.
Note that the @entry_ip is saved at function entry and passed to exit
handler.
If the entry callback function returns !0, the corresponding exit callback
will be cancelled.
@fp
This is the address of `fprobe` data structure related to this handler.
@ -112,12 +115,10 @@ If the entry callback function returns !0, the corresponding exit callback will
This is the return address that the traced function will return to,
somewhere in the caller. This can be used at both entry and exit.
@regs
This is the `pt_regs` data structure at the entry and exit. Note that
the instruction pointer of @regs may be different from the @entry_ip
in the entry_handler. If you need traced instruction pointer, you need
to use @entry_ip. On the other hand, in the exit_handler, the instruction
pointer of @regs is set to the current return address.
@fregs
This is the `ftrace_regs` data structure at the entry and exit. This
includes the function parameters, or the return values. So users can
access those values via the appropriate `ftrace_regs_*` APIs.
@entry_data
This is a local storage to share the data between entry and exit handlers.
@ -125,6 +126,17 @@ If the entry callback function returns !0, the corresponding exit callback will
and `entry_data_size` field when registering the fprobe, the storage is
allocated and passed to both `entry_handler` and `exit_handler`.
Entry data size and exit handlers on the same function
======================================================
Since the entry data is passed via the per-task stack, which has a limited
size, the entry data size per probe is limited to `15 * sizeof(long)`. You
also need to take care that when different fprobes are probing the same
function, this limit becomes smaller. The entry data size is aligned to
`sizeof(long)`, and each fprobe which has an exit handler uses an additional
`sizeof(long)` of space on the stack, so you should keep the number of
fprobes on the same function as small as possible.
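
As a sketch (illustration only, not part of this patch; the names are
hypothetical), the entry data is declared when registering the fprobe and
the same storage is passed to both handlers:

.. code-block:: c

 struct my_data {
	unsigned long ts;	/* filled by the entry handler */
 };

 static struct fprobe fp = {
	.entry_handler   = my_entry,	/* writes entry_data->ts */
	.exit_handler    = my_exit,	/* reads entry_data->ts */
	/* must not exceed 15 * sizeof(long), see the limit above */
	.entry_data_size = sizeof(struct my_data),
 };
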
Share the callbacks with kprobes
================================
@ -165,8 +177,8 @@ This counter counts up when;
- fprobe fails to take ftrace_recursion lock. This usually means that a function
which is traced by other ftrace users is called from the entry_handler.
- fprobe fails to setup the function exit because of the shortage of rethook
(the shadow stack for hooking the function return.)
- fprobe fails to setup the function exit because of failing to allocate the
data buffer from the per-task shadow stack.
The `fprobe::nmissed` field counts up in both cases. Therefore, the former
skips both of entry and exit callback and the latter skips the exit


@ -217,9 +217,11 @@ config ARM64
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_GUP_FAST
select HAVE_FTRACE_GRAPH_FUNC
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_RETVAL
select HAVE_GCC_PLUGINS


@ -8,6 +8,7 @@ syscall-y += unistd_32.h
syscall-y += unistd_compat_32.h
generic-y += early_ioremap.h
generic-y += fprobe.h
generic-y += mcs_spinlock.h
generic-y += mmzone.h
generic-y += qrwlock.h


@ -52,6 +52,8 @@ extern unsigned long ftrace_graph_call;
extern void return_to_handler(void);
unsigned long ftrace_call_adjust(unsigned long addr);
unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip);
#define ftrace_get_symaddr(fentry_ip) arch_ftrace_get_symaddr(fentry_ip)
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
#define HAVE_ARCH_FTRACE_REGS
@ -129,6 +131,38 @@ ftrace_override_function_with_return(struct ftrace_regs *fregs)
arch_ftrace_regs(fregs)->pc = arch_ftrace_regs(fregs)->lr;
}
static __always_inline unsigned long
ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs)
{
return arch_ftrace_regs(fregs)->fp;
}
static __always_inline unsigned long
ftrace_regs_get_return_address(const struct ftrace_regs *fregs)
{
return arch_ftrace_regs(fregs)->lr;
}
static __always_inline struct pt_regs *
ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs)
{
struct __arch_ftrace_regs *afregs = arch_ftrace_regs(fregs);
memcpy(regs->regs, afregs->regs, sizeof(afregs->regs));
regs->sp = afregs->sp;
regs->pc = afregs->pc;
regs->regs[29] = afregs->fp;
regs->regs[30] = afregs->lr;
return regs;
}
#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
(_regs)->pc = arch_ftrace_regs(fregs)->pc; \
(_regs)->regs[29] = arch_ftrace_regs(fregs)->fp; \
(_regs)->sp = arch_ftrace_regs(fregs)->sp; \
(_regs)->pstate = PSR_MODE_EL1h; \
} while (0)
int ftrace_regs_query_register_offset(const char *name);
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
@ -186,23 +220,6 @@ static inline bool arch_syscall_match_sym_name(const char *sym,
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
/* x0 - x7 */
unsigned long regs[8];
unsigned long fp;
unsigned long __unused;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->regs[0];
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->fp;
}
void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
unsigned long frame_pointer);


@ -179,18 +179,6 @@ int main(void)
DEFINE(FTRACE_OPS_FUNC, offsetof(struct ftrace_ops, func));
#endif
BLANK();
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
DEFINE(FGRET_REGS_X0, offsetof(struct fgraph_ret_regs, regs[0]));
DEFINE(FGRET_REGS_X1, offsetof(struct fgraph_ret_regs, regs[1]));
DEFINE(FGRET_REGS_X2, offsetof(struct fgraph_ret_regs, regs[2]));
DEFINE(FGRET_REGS_X3, offsetof(struct fgraph_ret_regs, regs[3]));
DEFINE(FGRET_REGS_X4, offsetof(struct fgraph_ret_regs, regs[4]));
DEFINE(FGRET_REGS_X5, offsetof(struct fgraph_ret_regs, regs[5]));
DEFINE(FGRET_REGS_X6, offsetof(struct fgraph_ret_regs, regs[6]));
DEFINE(FGRET_REGS_X7, offsetof(struct fgraph_ret_regs, regs[7]));
DEFINE(FGRET_REGS_FP, offsetof(struct fgraph_ret_regs, fp));
DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs));
#endif
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
DEFINE(FTRACE_OPS_DIRECT_CALL, offsetof(struct ftrace_ops, direct_call));
#endif


@ -329,24 +329,28 @@ SYM_FUNC_END(ftrace_stub_graph)
* @fp is checked against the value passed by ftrace_graph_caller().
*/
SYM_CODE_START(return_to_handler)
/* save return value regs */
sub sp, sp, #FGRET_REGS_SIZE
stp x0, x1, [sp, #FGRET_REGS_X0]
stp x2, x3, [sp, #FGRET_REGS_X2]
stp x4, x5, [sp, #FGRET_REGS_X4]
stp x6, x7, [sp, #FGRET_REGS_X6]
str x29, [sp, #FGRET_REGS_FP] // parent's fp
/* Make room for ftrace_regs */
sub sp, sp, #FREGS_SIZE
/* Save return value regs */
stp x0, x1, [sp, #FREGS_X0]
stp x2, x3, [sp, #FREGS_X2]
stp x4, x5, [sp, #FREGS_X4]
stp x6, x7, [sp, #FREGS_X6]
/* Save the callsite's FP */
str x29, [sp, #FREGS_FP]
mov x0, sp
bl ftrace_return_to_handler // addr = ftrace_return_to_handler(regs);
bl ftrace_return_to_handler // addr = ftrace_return_to_handler(fregs);
mov x30, x0 // restore the original return address
/* restore return value regs */
ldp x0, x1, [sp, #FGRET_REGS_X0]
ldp x2, x3, [sp, #FGRET_REGS_X2]
ldp x4, x5, [sp, #FGRET_REGS_X4]
ldp x6, x7, [sp, #FGRET_REGS_X6]
add sp, sp, #FGRET_REGS_SIZE
/* Restore return value regs */
ldp x0, x1, [sp, #FREGS_X0]
ldp x2, x3, [sp, #FREGS_X2]
ldp x4, x5, [sp, #FREGS_X4]
ldp x6, x7, [sp, #FREGS_X6]
add sp, sp, #FREGS_SIZE
ret
SYM_CODE_END(return_to_handler)


@ -143,6 +143,69 @@ unsigned long ftrace_call_adjust(unsigned long addr)
return addr;
}
/* Convert fentry_ip to the symbol address without kallsyms */
unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip)
{
u32 insn;
/*
* When using patchable-function-entry without pre-function NOPS, ftrace
* entry is the address of the first NOP after the function entry point.
*
* The compiler has either generated:
*
* func+00: func: NOP // To be patched to MOV X9, LR
* func+04: NOP // To be patched to BL <caller>
*
* Or:
*
* func-04: BTI C
* func+00: func: NOP // To be patched to MOV X9, LR
* func+04: NOP // To be patched to BL <caller>
*
* The fentry_ip is the address of `BL <caller>` which is at `func + 4`
* bytes in either case.
*/
if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
return fentry_ip - AARCH64_INSN_SIZE;
/*
* When using patchable-function-entry with pre-function NOPs, BTI is
* a bit different.
*
* func+00: func: NOP // To be patched to MOV X9, LR
* func+04: NOP // To be patched to BL <caller>
*
* Or:
*
* func+00: func: BTI C
* func+04: NOP // To be patched to MOV X9, LR
* func+08: NOP // To be patched to BL <caller>
*
* The fentry_ip is the address of `BL <caller>` which is at either
* `func + 4` or `func + 8`, depending on whether there is a BTI.
*/
/* If there is no BTI, the func address should be one instruction before. */
if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
return fentry_ip - AARCH64_INSN_SIZE;
/* We want to be extra safe in case entry ip is on the page edge,
* but otherwise we need to avoid get_kernel_nofault()'s overhead.
*/
if ((fentry_ip & ~PAGE_MASK) < AARCH64_INSN_SIZE * 2) {
if (get_kernel_nofault(insn, (u32 *)(fentry_ip - AARCH64_INSN_SIZE * 2)))
return 0;
} else {
insn = *(u32 *)(fentry_ip - AARCH64_INSN_SIZE * 2);
}
if (aarch64_insn_is_bti(le32_to_cpu((__le32)insn)))
return fentry_ip - AARCH64_INSN_SIZE * 2;
return fentry_ip - AARCH64_INSN_SIZE;
}
/*
* Replace a single instruction, which may be a branch or NOP.
* If @validate == true, a replaced instruction is checked against 'old'.
@ -481,7 +544,20 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
prepare_ftrace_return(ip, &arch_ftrace_regs(fregs)->lr, arch_ftrace_regs(fregs)->fp);
unsigned long return_hooker = (unsigned long)&return_to_handler;
unsigned long frame_pointer = arch_ftrace_regs(fregs)->fp;
unsigned long *parent = &arch_ftrace_regs(fregs)->lr;
unsigned long old;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
old = *parent;
if (!function_graph_enter_regs(old, ip, frame_pointer,
(void *)frame_pointer, fregs)) {
*parent = return_hooker;
}
}
#else
/*


@ -129,16 +129,18 @@ config LOONGARCH
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_ARGS
select HAVE_FTRACE_REGS_HAVING_PT_REGS
select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_EBPF_JIT
select HAVE_EFFICIENT_UNALIGNED_ACCESS if !ARCH_STRICT_ALIGN
select HAVE_EXIT_THREAD
select HAVE_GUP_FAST
select HAVE_FTRACE_GRAPH_FUNC
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS


@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_LOONGARCH_FPROBE_H
#define _ASM_LOONGARCH_FPROBE_H
/*
* Explicitly undef ARCH_DEFINE_ENCODE_FPROBE_HEADER, because loongarch does
* not have enough fixed MSBs in the addresses of kernel objects to encode
* the data size in fprobe_header. Use the 2-entry encoding instead.
*/
#undef ARCH_DEFINE_ENCODE_FPROBE_HEADER
#endif /* _ASM_LOONGARCH_FPROBE_H */


@ -57,6 +57,16 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip)
instruction_pointer_set(&arch_ftrace_regs(fregs)->regs, ip);
}
#undef ftrace_regs_get_frame_pointer
#define ftrace_regs_get_frame_pointer(fregs) \
(arch_ftrace_regs(fregs)->regs.regs[22])
static __always_inline unsigned long
ftrace_regs_get_return_address(struct ftrace_regs *fregs)
{
return *(unsigned long *)(arch_ftrace_regs(fregs)->regs.regs[1]);
}
#define ftrace_graph_func ftrace_graph_func
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs);
@ -78,26 +88,4 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
#endif /* CONFIG_FUNCTION_TRACER */
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
/* a0 - a1 */
unsigned long regs[2];
unsigned long fp;
unsigned long __unused;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->regs[0];
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->fp;
}
#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
#endif
#endif /* _ASM_LOONGARCH_FTRACE_H */


@ -280,18 +280,6 @@ static void __used output_pbe_defines(void)
}
#endif
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
static void __used output_fgraph_ret_regs_defines(void)
{
COMMENT("LoongArch fgraph_ret_regs offsets.");
OFFSET(FGRET_REGS_A0, fgraph_ret_regs, regs[0]);
OFFSET(FGRET_REGS_A1, fgraph_ret_regs, regs[1]);
OFFSET(FGRET_REGS_FP, fgraph_ret_regs, fp);
DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs));
BLANK();
}
#endif
static void __used output_kvm_defines(void)
{
COMMENT("KVM/LoongArch Specific offsets.");


@ -243,8 +243,16 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
{
struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
unsigned long *parent = (unsigned long *)&regs->regs[1];
unsigned long return_hooker = (unsigned long)&return_to_handler;
unsigned long old;
prepare_ftrace_return(ip, (unsigned long *)parent);
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
old = *parent;
if (!function_graph_enter_regs(old, ip, 0, parent, fregs))
*parent = return_hooker;
}
#else
static int ftrace_modify_graph_caller(bool enable)


@ -79,10 +79,11 @@ SYM_FUNC_START(ftrace_graph_caller)
SYM_FUNC_END(ftrace_graph_caller)
SYM_FUNC_START(return_to_handler)
PTR_ADDI sp, sp, -FGRET_REGS_SIZE
PTR_S a0, sp, FGRET_REGS_A0
PTR_S a1, sp, FGRET_REGS_A1
PTR_S zero, sp, FGRET_REGS_FP
/* Save return value regs */
PTR_ADDI sp, sp, -PT_SIZE
PTR_S a0, sp, PT_R4
PTR_S a1, sp, PT_R5
PTR_S zero, sp, PT_R22
move a0, sp
bl ftrace_return_to_handler
@ -90,9 +91,11 @@ SYM_FUNC_START(return_to_handler)
/* Restore the real parent address: a0 -> ra */
move ra, a0
PTR_L a0, sp, FGRET_REGS_A0
PTR_L a1, sp, FGRET_REGS_A1
PTR_ADDI sp, sp, FGRET_REGS_SIZE
/* Restore return value regs */
PTR_L a0, sp, PT_R4
PTR_L a1, sp, PT_R5
PTR_ADDI sp, sp, PT_SIZE
jr ra
SYM_FUNC_END(return_to_handler)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */


@ -140,19 +140,19 @@ SYM_CODE_END(ftrace_graph_caller)
SYM_CODE_START(return_to_handler)
UNWIND_HINT_UNDEFINED
/* Save return value regs */
PTR_ADDI sp, sp, -FGRET_REGS_SIZE
PTR_S a0, sp, FGRET_REGS_A0
PTR_S a1, sp, FGRET_REGS_A1
PTR_S zero, sp, FGRET_REGS_FP
PTR_ADDI sp, sp, -PT_SIZE
PTR_S a0, sp, PT_R4
PTR_S a1, sp, PT_R5
PTR_S zero, sp, PT_R22
move a0, sp
bl ftrace_return_to_handler
move ra, a0
/* Restore return value regs */
PTR_L a0, sp, FGRET_REGS_A0
PTR_L a1, sp, FGRET_REGS_A1
PTR_ADDI sp, sp, FGRET_REGS_SIZE
PTR_L a0, sp, PT_R4
PTR_L a1, sp, PT_R5
PTR_ADDI sp, sp, PT_SIZE
jr ra
SYM_CODE_END(return_to_handler)


@ -241,6 +241,7 @@ config PPC
select HAVE_EBPF_JIT
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_GUP_FAST
select HAVE_FTRACE_GRAPH_FUNC
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_FUNCTION_DESCRIPTORS if PPC64_ELF_ABI_V1


@ -43,6 +43,13 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *
return arch_ftrace_regs(fregs)->regs.msr ? &arch_ftrace_regs(fregs)->regs : NULL;
}
#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
(_regs)->result = 0; \
(_regs)->nip = arch_ftrace_regs(fregs)->regs.nip; \
(_regs)->gpr[1] = arch_ftrace_regs(fregs)->regs.gpr[1]; \
asm volatile("mfmsr %0" : "=r" ((_regs)->msr)); \
} while (0)
static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long ip)
@ -50,6 +57,12 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
regs_set_return_ip(&arch_ftrace_regs(fregs)->regs, ip);
}
static __always_inline unsigned long
ftrace_regs_get_return_address(struct ftrace_regs *fregs)
{
return arch_ftrace_regs(fregs)->regs.link;
}
struct ftrace_ops;
#define ftrace_graph_func ftrace_graph_func


@ -658,7 +658,6 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
unsigned long sp = arch_ftrace_regs(fregs)->regs.gpr[1];
int bit;
if (unlikely(ftrace_graph_is_dead()))
goto out;
@ -666,14 +665,9 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
if (unlikely(atomic_read(&current->tracing_graph_pause)))
goto out;
bit = ftrace_test_recursion_trylock(ip, parent_ip);
if (bit < 0)
goto out;
if (!function_graph_enter(parent_ip, ip, 0, (unsigned long *)sp))
if (!function_graph_enter_regs(parent_ip, ip, 0, (unsigned long *)sp, fregs))
parent_ip = ppc_function_entry(return_to_handler);
ftrace_test_recursion_unlock(bit);
out:
arch_ftrace_regs(fregs)->regs.link = parent_ip;
}


@ -787,10 +787,10 @@ int ftrace_disable_ftrace_graph_caller(void)
* in current thread info. Return the address we want to divert to.
*/
static unsigned long
__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp)
__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp,
struct ftrace_regs *fregs)
{
unsigned long return_hooker;
int bit;
if (unlikely(ftrace_graph_is_dead()))
goto out;
@ -798,16 +798,11 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp
if (unlikely(atomic_read(&current->tracing_graph_pause)))
goto out;
bit = ftrace_test_recursion_trylock(ip, parent);
if (bit < 0)
goto out;
return_hooker = ppc_function_entry(return_to_handler);
if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp))
if (!function_graph_enter_regs(parent, ip, 0, (unsigned long *)sp, fregs))
parent = return_hooker;
ftrace_test_recursion_unlock(bit);
out:
return parent;
}
@ -816,13 +811,14 @@ out:
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
arch_ftrace_regs(fregs)->regs.link = __prepare_ftrace_return(parent_ip, ip, arch_ftrace_regs(fregs)->regs.gpr[1]);
arch_ftrace_regs(fregs)->regs.link = __prepare_ftrace_return(parent_ip, ip,
arch_ftrace_regs(fregs)->regs.gpr[1], fregs);
}
#else
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
unsigned long sp)
{
return __prepare_ftrace_return(parent, ip, sp);
return __prepare_ftrace_return(parent, ip, sp, NULL);
}
#endif
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */


@ -146,9 +146,10 @@ config RISCV
select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && MMU && (CLANG_SUPPORTS_DYNAMIC_FTRACE || GCC_SUPPORTS_DYNAMIC_FTRACE)
select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_GRAPH_FUNC
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
select HAVE_EBPF_JIT if MMU
select HAVE_GUP_FAST if MMU


@ -4,6 +4,7 @@ syscall-y += syscall_table_64.h
generic-y += early_ioremap.h
generic-y += flat.h
generic-y += fprobe.h
generic-y += kvm_para.h
generic-y += mmzone.h
generic-y += mcs_spinlock.h


@ -168,6 +168,11 @@ static __always_inline unsigned long ftrace_regs_get_stack_pointer(const struct
return arch_ftrace_regs(fregs)->sp;
}
static __always_inline unsigned long ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs)
{
return arch_ftrace_regs(fregs)->s0;
}
static __always_inline unsigned long ftrace_regs_get_argument(struct ftrace_regs *fregs,
unsigned int n)
{
@ -181,6 +186,11 @@ static __always_inline unsigned long ftrace_regs_get_return_value(const struct f
return arch_ftrace_regs(fregs)->a0;
}
static __always_inline unsigned long ftrace_regs_get_return_address(const struct ftrace_regs *fregs)
{
return arch_ftrace_regs(fregs)->ra;
}
static __always_inline void ftrace_regs_set_return_value(struct ftrace_regs *fregs,
unsigned long ret)
{
@ -192,6 +202,20 @@ static __always_inline void ftrace_override_function_with_return(struct ftrace_r
arch_ftrace_regs(fregs)->epc = arch_ftrace_regs(fregs)->ra;
}
static __always_inline struct pt_regs *
ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs)
{
struct __arch_ftrace_regs *afregs = arch_ftrace_regs(fregs);
memcpy(&regs->a0, afregs->args, sizeof(afregs->args));
regs->epc = afregs->epc;
regs->ra = afregs->ra;
regs->sp = afregs->sp;
regs->s0 = afregs->s0;
regs->t1 = afregs->t1;
return regs;
}
int ftrace_regs_query_register_offset(const char *name);
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
@ -208,25 +232,4 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsi
#endif /* CONFIG_DYNAMIC_FTRACE */
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
unsigned long a1;
unsigned long a0;
unsigned long s0;
unsigned long ra;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->a0;
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->s0;
}
#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
#endif
#endif /* _ASM_RISCV_FTRACE_H */


@ -214,7 +214,22 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
prepare_ftrace_return(&arch_ftrace_regs(fregs)->ra, ip, arch_ftrace_regs(fregs)->s0);
unsigned long return_hooker = (unsigned long)&return_to_handler;
unsigned long frame_pointer = arch_ftrace_regs(fregs)->s0;
unsigned long *parent = &arch_ftrace_regs(fregs)->ra;
unsigned long old;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
/*
* We don't suffer access faults, so no extra fault-recovery assembly
* is needed here.
*/
old = *parent;
if (!function_graph_enter_regs(old, ip, frame_pointer, parent, fregs))
*parent = return_hooker;
}
#else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */
extern void ftrace_graph_call(void);


@ -12,6 +12,8 @@
#include <asm/asm-offsets.h>
#include <asm/ftrace.h>
#define ABI_SIZE_ON_STACK 80
.text
.macro SAVE_ABI_STATE
@ -26,12 +28,12 @@
* register if a0 was not saved.
*/
.macro SAVE_RET_ABI_STATE
addi sp, sp, -4*SZREG
REG_S s0, 2*SZREG(sp)
REG_S ra, 3*SZREG(sp)
REG_S a0, 1*SZREG(sp)
REG_S a1, 0*SZREG(sp)
addi s0, sp, 4*SZREG
addi sp, sp, -ABI_SIZE_ON_STACK
REG_S ra, 1*SZREG(sp)
REG_S s0, 8*SZREG(sp)
REG_S a0, 10*SZREG(sp)
REG_S a1, 11*SZREG(sp)
addi s0, sp, ABI_SIZE_ON_STACK
.endm
.macro RESTORE_ABI_STATE
@ -41,11 +43,11 @@
.endm
.macro RESTORE_RET_ABI_STATE
REG_L ra, 3*SZREG(sp)
REG_L s0, 2*SZREG(sp)
REG_L a0, 1*SZREG(sp)
REG_L a1, 0*SZREG(sp)
addi sp, sp, 4*SZREG
REG_L ra, 1*SZREG(sp)
REG_L s0, 8*SZREG(sp)
REG_L a0, 10*SZREG(sp)
REG_L a1, 11*SZREG(sp)
addi sp, sp, ABI_SIZE_ON_STACK
.endm
SYM_TYPED_FUNC_START(ftrace_stub)


@ -183,16 +183,18 @@ config S390
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_ARGS
select HAVE_FTRACE_REGS_HAVING_PT_REGS
select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_EBPF_JIT if HAVE_MARCH_Z196_FEATURES
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_GUP_FAST
select HAVE_FENTRY
select HAVE_FTRACE_GRAPH_FUNC
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_RETVAL
select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS


@ -0,0 +1,10 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_S390_FPROBE_H
#define _ASM_S390_FPROBE_H
#include <asm-generic/fprobe.h>
#undef FPROBE_HEADER_MSB_PATTERN
#define FPROBE_HEADER_MSB_PATTERN 0
#endif /* _ASM_S390_FPROBE_H */


@ -39,6 +39,7 @@ struct dyn_arch_ftrace { };
struct module;
struct dyn_ftrace;
struct ftrace_ops;
bool ftrace_need_init_nop(void);
#define ftrace_need_init_nop ftrace_need_init_nop
@ -62,23 +63,6 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *
return NULL;
}
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
unsigned long gpr2;
unsigned long fp;
};
static __always_inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->gpr2;
}
static __always_inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->fp;
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long ip)
@ -86,6 +70,25 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
arch_ftrace_regs(fregs)->regs.psw.addr = ip;
}
#undef ftrace_regs_get_frame_pointer
static __always_inline unsigned long
ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs)
{
return ftrace_regs_get_stack_pointer(fregs);
}
static __always_inline unsigned long
ftrace_regs_get_return_address(const struct ftrace_regs *fregs)
{
return arch_ftrace_regs(fregs)->regs.gprs[14];
}
#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
(_regs)->psw.mask = 0; \
(_regs)->psw.addr = arch_ftrace_regs(fregs)->regs.psw.addr; \
(_regs)->gprs[15] = arch_ftrace_regs(fregs)->regs.gprs[15]; \
} while (0)
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/*
* When an ftrace registered caller is tracing a function that is
@ -126,6 +129,10 @@ static inline bool arch_syscall_match_sym_name(const char *sym,
return !strcmp(sym + 7, name) || !strcmp(sym + 8, name);
}
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs);
#define ftrace_graph_func ftrace_graph_func
#endif /* __ASSEMBLY__ */
#ifdef CONFIG_FUNCTION_TRACER


@ -175,12 +175,6 @@ int main(void)
DEFINE(OLDMEM_SIZE, PARMAREA + offsetof(struct parmarea, oldmem_size));
DEFINE(COMMAND_LINE, PARMAREA + offsetof(struct parmarea, command_line));
DEFINE(MAX_COMMAND_LINE_SIZE, PARMAREA + offsetof(struct parmarea, max_command_line_size));
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* function graph return value tracing */
OFFSET(__FGRAPH_RET_GPR2, fgraph_ret_regs, gpr2);
OFFSET(__FGRAPH_RET_FP, fgraph_ret_regs, fp);
DEFINE(__FGRAPH_RET_SIZE, sizeof(struct fgraph_ret_regs));
#endif
OFFSET(__FTRACE_REGS_PT_REGS, __arch_ftrace_regs, regs);
DEFINE(__FTRACE_REGS_SIZE, sizeof(struct __arch_ftrace_regs));


@ -41,7 +41,6 @@ void do_restart(void *arg);
void __init startup_init(void);
void die(struct pt_regs *regs, const char *str);
int setup_profiling_timer(unsigned int multiplier);
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long sp, unsigned long ip);
struct s390_mmap_arg_struct;
struct fadvise64_64_args;


@ -261,43 +261,23 @@ void ftrace_arch_code_modify_post_process(void)
}
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Hook the return address and push it in the stack of return addresses
* in current thread info.
*/
unsigned long prepare_ftrace_return(unsigned long ra, unsigned long sp,
unsigned long ip)
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
unsigned long *parent = &arch_ftrace_regs(fregs)->regs.gprs[14];
int bit;
if (unlikely(ftrace_graph_is_dead()))
goto out;
return;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
goto out;
ip -= MCOUNT_INSN_SIZE;
if (!function_graph_enter(ra, ip, 0, (void *) sp))
ra = (unsigned long) return_to_handler;
out:
return ra;
}
NOKPROBE_SYMBOL(prepare_ftrace_return);
/*
* Patch the kernel code at ftrace_graph_caller location. The instruction
* there is branch relative on condition. To enable the ftrace graph code
* block, we simply patch the mask field of the instruction to zero and
* turn the instruction into a nop.
* To disable the ftrace graph code the mask field will be patched to
* all ones, which turns the instruction into an unconditional branch.
*/
int ftrace_enable_ftrace_graph_caller(void)
{
/* Expect brc 0xf,... */
return ftrace_patch_branch_mask(ftrace_graph_caller, 0xa7f4, false);
}
int ftrace_disable_ftrace_graph_caller(void)
{
/* Expect brc 0x0,... */
return ftrace_patch_branch_mask(ftrace_graph_caller, 0xa704, true);
return;
bit = ftrace_test_recursion_trylock(ip, *parent);
if (bit < 0)
return;
if (!function_graph_enter_regs(*parent, ip, 0, parent, fregs))
*parent = (unsigned long)&return_to_handler;
ftrace_test_recursion_unlock(bit);
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */


@ -104,17 +104,6 @@ SYM_CODE_START(ftrace_common)
lgr %r3,%r14
la %r5,STACK_FREGS(%r15)
BASR_EX %r14,%r1
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
# The j instruction gets runtime patched to a nop instruction.
# See ftrace_enable_ftrace_graph_caller.
SYM_INNER_LABEL(ftrace_graph_caller, SYM_L_GLOBAL)
j .Lftrace_graph_caller_end
lmg %r2,%r3,(STACK_FREGS_PTREGS_GPRS+14*8)(%r15)
lg %r4,(STACK_FREGS_PTREGS_PSW+8)(%r15)
brasl %r14,prepare_ftrace_return
stg %r2,(STACK_FREGS_PTREGS_GPRS+14*8)(%r15)
.Lftrace_graph_caller_end:
#endif
lg %r0,(STACK_FREGS_PTREGS_PSW+8)(%r15)
#ifdef MARCH_HAS_Z196_FEATURES
ltg %r1,STACK_FREGS_PTREGS_ORIG_GPR2(%r15)
@ -134,14 +123,14 @@ SYM_CODE_END(ftrace_common)
SYM_FUNC_START(return_to_handler)
stmg %r2,%r5,32(%r15)
lgr %r1,%r15
aghi %r15,-(STACK_FRAME_OVERHEAD+__FGRAPH_RET_SIZE)
# allocate ftrace_regs and stack frame for ftrace_return_to_handler
aghi %r15,-STACK_FRAME_SIZE_FREGS
stg %r1,__SF_BACKCHAIN(%r15)
la %r3,STACK_FRAME_OVERHEAD(%r15)
stg %r1,__FGRAPH_RET_FP(%r3)
stg %r2,__FGRAPH_RET_GPR2(%r3)
lgr %r2,%r3
stg %r2,(STACK_FREGS_PTREGS_GPRS+2*8)(%r15)
stg %r1,(STACK_FREGS_PTREGS_GPRS+15*8)(%r15)
la %r2,STACK_FRAME_OVERHEAD(%r15)
brasl %r14,ftrace_return_to_handler
aghi %r15,STACK_FRAME_OVERHEAD+__FGRAPH_RET_SIZE
aghi %r15,STACK_FRAME_SIZE_FREGS
lgr %r14,%r2
lmg %r2,%r5,32(%r15)
BR_EX %r14


@ -224,6 +224,7 @@ config X86
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64
select HAVE_FTRACE_REGS_HAVING_PT_REGS if X86_64
select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
select HAVE_SAMPLE_FTRACE_DIRECT if X86_64
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
@ -233,8 +234,9 @@ config X86
select HAVE_EXIT_THREAD
select HAVE_GUP_FAST
select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
select HAVE_FTRACE_GRAPH_FUNC if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_FREGS if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE)
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS


@ -10,5 +10,6 @@ generated-y += unistd_64_x32.h
generated-y += xen-hypercalls.h
generic-y += early_ioremap.h
generic-y += fprobe.h
generic-y += mcs_spinlock.h
generic-y += mmzone.h


@ -34,6 +34,27 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
return addr;
}
static inline unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip)
{
#ifdef CONFIG_X86_KERNEL_IBT
u32 instr;
/* We want to be extra safe in case entry ip is on the page edge,
* but otherwise we need to avoid get_kernel_nofault()'s overhead.
*/
if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
return fentry_ip;
} else {
instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
}
if (is_endbr(instr))
fentry_ip -= ENDBR_INSN_SIZE;
#endif
return fentry_ip;
}
#define ftrace_get_symaddr(fentry_ip) arch_ftrace_get_symaddr(fentry_ip)
#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
#include <linux/ftrace_regs.h>
@ -47,10 +68,23 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
return &arch_ftrace_regs(fregs)->regs;
}
#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
(_regs)->ip = arch_ftrace_regs(fregs)->regs.ip; \
(_regs)->sp = arch_ftrace_regs(fregs)->regs.sp; \
(_regs)->cs = __KERNEL_CS; \
(_regs)->flags = 0; \
} while (0)
#define ftrace_regs_set_instruction_pointer(fregs, _ip) \
do { arch_ftrace_regs(fregs)->regs.ip = (_ip); } while (0)
static __always_inline unsigned long
ftrace_regs_get_return_address(struct ftrace_regs *fregs)
{
return *(unsigned long *)ftrace_regs_get_stack_pointer(fregs);
}
struct ftrace_ops;
#define ftrace_graph_func ftrace_graph_func
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
@ -134,24 +168,4 @@ static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
#endif /* !COMPILE_OFFSETS */
#endif /* !__ASSEMBLY__ */
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
unsigned long ax;
unsigned long dx;
unsigned long bp;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->ax;
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->bp;
}
#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
#endif
#endif /* _ASM_X86_FTRACE_H */


@ -607,16 +607,8 @@ int ftrace_disable_ftrace_graph_caller(void)
}
#endif /* CONFIG_DYNAMIC_FTRACE && !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */
/*
* Hook the return address and push it in the stack of return addrs
* in current thread info.
*/
void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
unsigned long frame_pointer)
static inline bool skip_ftrace_return(void)
{
unsigned long return_hooker = (unsigned long)&return_to_handler;
int bit;
/*
* When resuming from suspend-to-ram, this function can be indirectly
* called from early CPU startup code while the CPU is in real mode,
@ -626,23 +618,31 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
* This check isn't as accurate as virt_addr_valid(), but it should be
* good enough for this purpose, and it's fast.
*/
if (unlikely((long)__builtin_frame_address(0) >= 0))
return;
if ((long)__builtin_frame_address(0) >= 0)
return true;
if (unlikely(ftrace_graph_is_dead()))
return;
if (ftrace_graph_is_dead())
return true;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
if (atomic_read(&current->tracing_graph_pause))
return true;
return false;
}
bit = ftrace_test_recursion_trylock(ip, *parent);
if (bit < 0)
/*
* Hook the return address and push it in the stack of return addrs
* in current thread info.
*/
void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
unsigned long frame_pointer)
{
unsigned long return_hooker = (unsigned long)&return_to_handler;
if (unlikely(skip_ftrace_return()))
return;
if (!function_graph_enter(*parent, ip, frame_pointer, parent))
*parent = return_hooker;
ftrace_test_recursion_unlock(bit);
}
#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
@ -651,8 +651,15 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
{
struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs);
unsigned long return_hooker = (unsigned long)&return_to_handler;
unsigned long *parent = (unsigned long *)stack;
prepare_ftrace_return(ip, (unsigned long *)stack, 0);
if (unlikely(skip_ftrace_return()))
return;
if (!function_graph_enter_regs(*parent, ip, 0, parent, fregs))
*parent = return_hooker;
}
#endif


@ -187,14 +187,15 @@ SYM_CODE_END(ftrace_graph_caller)
.globl return_to_handler
return_to_handler:
pushl $0
pushl %edx
pushl %eax
subl $(PTREGS_SIZE), %esp
movl $0, PT_EBP(%esp)
movl %edx, PT_EDX(%esp)
movl %eax, PT_EAX(%esp)
movl %esp, %eax
call ftrace_return_to_handler
movl %eax, %ecx
popl %eax
popl %edx
addl $4, %esp # skip ebp
movl PT_EAX(%esp), %eax
movl PT_EDX(%esp), %edx
addl $(PTREGS_SIZE), %esp
JMP_NOSPEC ecx
#endif


@ -348,21 +348,22 @@ STACK_FRAME_NON_STANDARD_FP(__fentry__)
SYM_CODE_START(return_to_handler)
UNWIND_HINT_UNDEFINED
ANNOTATE_NOENDBR
subq $24, %rsp
/* Save the return values */
movq %rax, (%rsp)
movq %rdx, 8(%rsp)
movq %rbp, 16(%rsp)
/* Save ftrace_regs for function exit context */
subq $(FRAME_SIZE), %rsp
movq %rax, RAX(%rsp)
movq %rdx, RDX(%rsp)
movq %rbp, RBP(%rsp)
movq %rsp, %rdi
call ftrace_return_to_handler
movq %rax, %rdi
movq 8(%rsp), %rdx
movq (%rsp), %rax
movq RDX(%rsp), %rdx
movq RAX(%rsp), %rax
addq $24, %rsp
addq $(FRAME_SIZE), %rsp
/*
* Jump back to the old return address. This cannot be JMP_NOSPEC rdi
* since IBT would demand that contain ENDBR, which simply isn't so for


@ -0,0 +1,46 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Generic arch dependent fprobe macros.
*/
#ifndef __ASM_GENERIC_FPROBE_H__
#define __ASM_GENERIC_FPROBE_H__
#include <linux/bits.h>
#ifdef CONFIG_64BIT
/*
* Encoding the size and the address of the fprobe into one 64bit entry.
* The 32bit architectures should use 2 entries to store this info.
*/
#define ARCH_DEFINE_ENCODE_FPROBE_HEADER
#define FPROBE_HEADER_MSB_SIZE_SHIFT (BITS_PER_LONG - FPROBE_DATA_SIZE_BITS)
#define FPROBE_HEADER_MSB_MASK \
GENMASK(FPROBE_HEADER_MSB_SIZE_SHIFT - 1, 0)
/*
* By default, this expects the MSBs of the probed address to be 0xf.
* If any arch needs another fixed pattern (e.g. s390 is zero filled),
* override this.
*/
#define FPROBE_HEADER_MSB_PATTERN \
GENMASK(BITS_PER_LONG - 1, FPROBE_HEADER_MSB_SIZE_SHIFT)
#define arch_fprobe_header_encodable(fp) \
(((unsigned long)(fp) & ~FPROBE_HEADER_MSB_MASK) == \
FPROBE_HEADER_MSB_PATTERN)
#define arch_encode_fprobe_header(fp, size) \
(((unsigned long)(fp) & FPROBE_HEADER_MSB_MASK) | \
((unsigned long)(size) << FPROBE_HEADER_MSB_SIZE_SHIFT))
#define arch_decode_fprobe_header_size(val) \
((unsigned long)(val) >> FPROBE_HEADER_MSB_SIZE_SHIFT)
#define arch_decode_fprobe_header_fp(val) \
((struct fprobe *)(((unsigned long)(val) & FPROBE_HEADER_MSB_MASK) | \
FPROBE_HEADER_MSB_PATTERN))
#endif /* CONFIG_64BIT */
#endif /* __ASM_GENERIC_FPROBE_H__ */
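
For illustration, a sketch (not part of the header) of how these macros
round-trip a probe pointer and a data size through one 64-bit shadow-stack
word, assuming a 64BIT kernel whose probe addresses match
FPROBE_HEADER_MSB_PATTERN:

	static void fprobe_header_roundtrip_sketch(struct fprobe *fp)
	{
		unsigned long word;

		if (!arch_fprobe_header_encodable(fp))
			return;	/* fall back to the 2-entry encoding */

		word = arch_encode_fprobe_header(fp, 3);
		WARN_ON(arch_decode_fprobe_header_size(word) != 3);
		WARN_ON(arch_decode_fprobe_header_fp(word) != fp);
	}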


@ -5,47 +5,68 @@
#include <linux/compiler.h>
#include <linux/ftrace.h>
#include <linux/rethook.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/slab.h>
struct fprobe;
typedef int (*fprobe_entry_cb)(struct fprobe *fp, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *regs,
void *entry_data);
typedef void (*fprobe_exit_cb)(struct fprobe *fp, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *regs,
void *entry_data);
/**
* struct fprobe_hlist_node - address based hash list node for fprobe.
*
* @hlist: The hlist node for address search hash table.
* @addr: One of the probing address of @fp.
* @fp: The fprobe which owns this.
*/
struct fprobe_hlist_node {
struct hlist_node hlist;
unsigned long addr;
struct fprobe *fp;
};
/**
* struct fprobe_hlist - hash list nodes for fprobe.
*
* @hlist: The hlist node for existence checking hash table.
* @rcu: rcu_head for RCU deferred release.
* @fp: The fprobe which owns this fprobe_hlist.
* @size: The size of @array.
* @array: The fprobe_hlist_node for each address to probe.
*/
struct fprobe_hlist {
struct hlist_node hlist;
struct rcu_head rcu;
struct fprobe *fp;
int size;
struct fprobe_hlist_node array[] __counted_by(size);
};
/**
* struct fprobe - ftrace based probe.
* @ops: The ftrace_ops.
*
* @nmissed: The counter for missing events.
* @flags: The status flag.
* @rethook: The rethook data structure. (internal data)
* @entry_data_size: The private data storage size.
* @nr_maxactive: The max number of active functions.
* @entry_handler: The callback function for function entry.
* @exit_handler: The callback function for function exit.
* @hlist_array: The fprobe_hlist for fprobe search from IP hash table.
*/
struct fprobe {
#ifdef CONFIG_FUNCTION_TRACER
/*
* If CONFIG_FUNCTION_TRACER is not set, CONFIG_FPROBE is disabled too.
* But users of fprobe may keep embedding struct fprobe in their own
* code. To avoid build errors, keep the fprobe data structure defined
* here, but remove the ftrace_ops data structure.
*/
struct ftrace_ops ops;
#endif
unsigned long nmissed;
unsigned int flags;
struct rethook *rethook;
size_t entry_data_size;
int nr_maxactive;
fprobe_entry_cb entry_handler;
fprobe_exit_cb exit_handler;
struct fprobe_hlist *hlist_array;
};
/* This fprobe is soft-disabled. */
@ -121,4 +142,9 @@ static inline void enable_fprobe(struct fprobe *fp)
fp->flags &= ~FPROBE_FL_DISABLED;
}
/* The entry data size is encoded in 4 bits, i.e. at most 15 * sizeof(long) */
#define FPROBE_DATA_SIZE_BITS 4
#define MAX_FPROBE_DATA_SIZE_WORD ((1L << FPROBE_DATA_SIZE_BITS) - 1)
#define MAX_FPROBE_DATA_SIZE (MAX_FPROBE_DATA_SIZE_WORD * sizeof(long))
#endif


@ -43,9 +43,8 @@ struct dyn_ftrace;
char *arch_ftrace_match_adjust(char *str, const char *search);
#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL
struct fgraph_ret_regs;
unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs);
#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS
unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs);
#else
unsigned long ftrace_return_to_handler(unsigned long frame_pointer);
#endif
@ -134,6 +133,13 @@ extern int ftrace_enabled;
* Also, architecture dependent fields can be used for internal process.
* (e.g. orig_ax on x86_64)
*
* Basically, ftrace_regs stores the registers related to the context.
* On function entry, registers for function parameters and hooking the
* function call are stored, and on function exit, registers for function
* return value and frame pointers are stored.
*
* Also, which registers are restored from the ftrace_regs depends on
* the context.
* On the function entry, those registers will be restored except for
* the stack pointer, so that user can change the function parameters
* and instruction pointer (e.g. live patching.)
@ -170,6 +176,12 @@ static inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
#define ftrace_regs_set_instruction_pointer(fregs, ip) do { } while (0)
#endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */
#ifdef CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS
static_assert(sizeof(struct pt_regs) == ftrace_regs_size());
#endif /* CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs)
{
if (!fregs)
@ -178,6 +190,54 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs
return arch_ftrace_get_regs(fregs);
}
#if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \
defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS)
static __always_inline struct pt_regs *
ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
{
/*
* If CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS=y, the ftrace_regs memory
* layout includes pt_regs, so always return that address. We can not
* use arch_ftrace_get_regs() since it checks some members and may
* return NULL.
*/
return &arch_ftrace_regs(fregs)->regs;
}
#endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
/*
* Please define an arch-dependent pt_regs which is compatible with
* perf_arch_fetch_caller_regs() but based on ftrace_regs.
* This requires
* - user_mode(_regs) returns false (always kernel mode).
* - able to use the _regs for stack trace.
*/
#ifndef arch_ftrace_fill_perf_regs
/* Same as perf_arch_fetch_caller_regs(): do nothing by default */
#define arch_ftrace_fill_perf_regs(fregs, _regs) do {} while (0)
#endif
static __always_inline struct pt_regs *
ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
{
arch_ftrace_fill_perf_regs(fregs, regs);
return regs;
}
#else /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */
static __always_inline struct pt_regs *
ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
{
return &arch_ftrace_regs(fregs)->regs;
}
#endif
/*
* When true, the ftrace_regs_{get,set}_*() functions may be used on fregs.
* Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs.
@ -190,6 +250,23 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs)
return ftrace_get_regs(fregs) != NULL;
}
#ifdef CONFIG_HAVE_REGS_AND_STACK_ACCESS_API
static __always_inline unsigned long
ftrace_regs_get_kernel_stack_nth(struct ftrace_regs *fregs, unsigned int nth)
{
unsigned long *stackp;
stackp = (unsigned long *)ftrace_regs_get_stack_pointer(fregs);
if (((unsigned long)(stackp + nth) & ~(THREAD_SIZE - 1)) ==
((unsigned long)stackp & ~(THREAD_SIZE - 1)))
return *(stackp + nth);
return 0;
}
#else /* !CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */
#define ftrace_regs_get_kernel_stack_nth(fregs, nth) (0L)
#endif /* CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */
typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs);
@ -545,6 +622,19 @@ enum {
FTRACE_MAY_SLEEP = (1 << 5),
};
/* Arches can override ftrace_get_symaddr() to convert fentry_ip to symaddr. */
#ifndef ftrace_get_symaddr
/**
* ftrace_get_symaddr - return the symbol address from fentry_ip
* @fentry_ip: the address of ftrace location
*
* Get the symbol address from @fentry_ip (fast path). If there is no fast
* search path, this returns 0.
* Users may need to use the kallsyms API to find the symbol address.
*/
#define ftrace_get_symaddr(fentry_ip) (0)
#endif
#ifdef CONFIG_DYNAMIC_FTRACE
void ftrace_arch_code_modify_prepare(void);
@ -1069,12 +1159,15 @@ struct fgraph_ops;
/* Type of the callback handlers for tracing function graph*/
typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *,
struct fgraph_ops *); /* return */
struct fgraph_ops *,
struct ftrace_regs *); /* return */
typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *,
struct fgraph_ops *); /* entry */
struct fgraph_ops *,
struct ftrace_regs *); /* entry */
extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops);
struct fgraph_ops *gops,
struct ftrace_regs *fregs);
bool ftrace_pids_enabled(struct ftrace_ops *ops);
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
@ -1114,8 +1207,15 @@ struct ftrace_ret_stack {
extern void return_to_handler(void);
extern int
function_graph_enter(unsigned long ret, unsigned long func,
unsigned long frame_pointer, unsigned long *retp);
function_graph_enter_regs(unsigned long ret, unsigned long func,
unsigned long frame_pointer, unsigned long *retp,
struct ftrace_regs *fregs);
static inline int function_graph_enter(unsigned long ret, unsigned long func,
unsigned long fp, unsigned long *retp)
{
return function_graph_enter_regs(ret, func, fp, retp, NULL);
}
struct ftrace_ret_stack *
ftrace_graph_get_ret_stack(struct task_struct *task, int skip);


@@ -30,6 +30,8 @@ struct ftrace_regs;
override_function_with_return(&arch_ftrace_regs(fregs)->regs)
#define ftrace_regs_query_register_offset(name) \
regs_query_register_offset(name)
#define ftrace_regs_get_frame_pointer(fregs) \
frame_pointer(&arch_ftrace_regs(fregs)->regs)
#endif /* HAVE_ARCH_FTRACE_REGS */


@@ -31,9 +31,14 @@ config HAVE_FUNCTION_GRAPH_TRACER
help
See Documentation/trace/ftrace-design.rst
config HAVE_FUNCTION_GRAPH_RETVAL
config HAVE_FUNCTION_GRAPH_FREGS
bool
config HAVE_FTRACE_GRAPH_FUNC
bool
help
True if ftrace_graph_func() is defined.
config HAVE_DYNAMIC_FTRACE
bool
help
@@ -57,6 +62,12 @@ config HAVE_DYNAMIC_FTRACE_WITH_ARGS
This allows for use of ftrace_regs_get_argument() and
ftrace_regs_get_stack_pointer().
config HAVE_FTRACE_REGS_HAVING_PT_REGS
bool
help
If this is set, ftrace_regs contains pt_regs, and thus it can be
converted to pt_regs without allocating memory.
config HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
bool
help
@@ -232,7 +243,7 @@ config FUNCTION_GRAPH_TRACER
config FUNCTION_GRAPH_RETVAL
bool "Kernel Function Graph Return Value"
depends on HAVE_FUNCTION_GRAPH_RETVAL
depends on HAVE_FUNCTION_GRAPH_FREGS
depends on FUNCTION_GRAPH_TRACER
default n
help
@@ -296,10 +307,9 @@ config DYNAMIC_FTRACE_WITH_ARGS
config FPROBE
bool "Kernel Function Probe (fprobe)"
depends on FUNCTION_TRACER
depends on DYNAMIC_FTRACE_WITH_REGS
depends on HAVE_RETHOOK
select RETHOOK
depends on HAVE_FUNCTION_GRAPH_FREGS && HAVE_FTRACE_GRAPH_FUNC
depends on DYNAMIC_FTRACE_WITH_ARGS
select FUNCTION_GRAPH_TRACER
default n
help
This option enables kernel function probe (fprobe) based on ftrace.


@@ -2585,6 +2585,20 @@ struct user_syms {
char *buf;
};
#ifndef CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS
static DEFINE_PER_CPU(struct pt_regs, bpf_kprobe_multi_pt_regs);
#define bpf_kprobe_multi_pt_regs_ptr() this_cpu_ptr(&bpf_kprobe_multi_pt_regs)
#else
#define bpf_kprobe_multi_pt_regs_ptr() (NULL)
#endif
static unsigned long ftrace_get_entry_ip(unsigned long fentry_ip)
{
unsigned long ip = ftrace_get_symaddr(fentry_ip);
return ip ? : fentry_ip;
}
static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32 cnt)
{
unsigned long __user usymbol;
@@ -2779,7 +2793,7 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx)
static int
kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
unsigned long entry_ip, struct pt_regs *regs,
unsigned long entry_ip, struct ftrace_regs *fregs,
bool is_return, void *data)
{
struct bpf_kprobe_multi_run_ctx run_ctx = {
@@ -2791,6 +2805,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
.entry_ip = entry_ip,
};
struct bpf_run_ctx *old_run_ctx;
struct pt_regs *regs;
int err;
if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
@@ -2801,6 +2816,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
migrate_disable();
rcu_read_lock();
regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
err = bpf_prog_run(link->link.prog, regs);
bpf_reset_run_ctx(old_run_ctx);
@@ -2814,26 +2830,28 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
static int
kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *fregs,
void *data)
{
struct bpf_kprobe_multi_link *link;
int err;
link = container_of(fp, struct bpf_kprobe_multi_link, fp);
err = kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, false, data);
err = kprobe_multi_link_prog_run(link, ftrace_get_entry_ip(fentry_ip),
fregs, false, data);
return is_kprobe_session(link->link.prog) ? err : 0;
}
static void
kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *fregs,
void *data)
{
struct bpf_kprobe_multi_link *link;
link = container_of(fp, struct bpf_kprobe_multi_link, fp);
kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, true, data);
kprobe_multi_link_prog_run(link, ftrace_get_entry_ip(fentry_ip),
fregs, true, data);
}
static int symbols_cmp_r(const void *a, const void *b, const void *priv)


@@ -292,13 +292,15 @@ static inline unsigned long make_data_type_val(int idx, int size, int offset)
}
/* ftrace_graph_entry set to this to tell some archs to run function graph */
static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops)
static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops,
struct ftrace_regs *fregs)
{
return 0;
}
/* ftrace_graph_return set to this to tell some archs to run function graph */
static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops)
static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops,
struct ftrace_regs *fregs)
{
}
@@ -520,13 +522,15 @@ int __weak ftrace_disable_ftrace_graph_caller(void)
#endif
int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
struct fgraph_ops *gops,
struct ftrace_regs *fregs)
{
return 0;
}
static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
struct fgraph_ops *gops,
struct ftrace_regs *fregs)
{
}
@@ -644,14 +648,20 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
#endif
/* If the caller does not use ftrace, call this function. */
int function_graph_enter(unsigned long ret, unsigned long func,
unsigned long frame_pointer, unsigned long *retp)
int function_graph_enter_regs(unsigned long ret, unsigned long func,
unsigned long frame_pointer, unsigned long *retp,
struct ftrace_regs *fregs)
{
struct ftrace_graph_ent trace;
unsigned long bitmap = 0;
int offset;
int bit;
int i;
bit = ftrace_test_recursion_trylock(func, ret);
if (bit < 0)
return -EBUSY;
trace.func = func;
trace.depth = ++current->curr_ret_depth;
@@ -663,7 +673,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,
if (static_branch_likely(&fgraph_do_direct)) {
int save_curr_ret_stack = current->curr_ret_stack;
if (static_call(fgraph_func)(&trace, fgraph_direct_gops))
if (static_call(fgraph_func)(&trace, fgraph_direct_gops, fregs))
bitmap |= BIT(fgraph_direct_gops->idx);
else
/* Clear out any saved storage */
@@ -681,7 +691,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,
save_curr_ret_stack = current->curr_ret_stack;
if (ftrace_ops_test(&gops->ops, func, NULL) &&
gops->entryfunc(&trace, gops))
gops->entryfunc(&trace, gops, fregs))
bitmap |= BIT(i);
else
/* Clear out any saved storage */
@@ -697,12 +707,13 @@ int function_graph_enter(unsigned long ret, unsigned long func,
* flag, set that bit always.
*/
set_bitmap(current, offset, bitmap | BIT(0));
ftrace_test_recursion_unlock(bit);
return 0;
out_ret:
current->curr_ret_stack -= FGRAPH_FRAME_OFFSET + 1;
out:
current->curr_ret_depth--;
ftrace_test_recursion_unlock(bit);
return -EBUSY;
}
@@ -792,15 +803,12 @@ static struct notifier_block ftrace_suspend_notifier = {
.notifier_call = ftrace_suspend_notifier_call,
};
/* fgraph_ret_regs is not defined without CONFIG_FUNCTION_GRAPH_RETVAL */
struct fgraph_ret_regs;
/*
* Send the trace to the ring-buffer.
* @return the original return address.
*/
static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs,
unsigned long frame_pointer)
static inline unsigned long
__ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointer)
{
struct ftrace_ret_stack *ret_stack;
struct ftrace_graph_ret trace;
@@ -819,8 +827,11 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
}
trace.rettime = trace_clock_local();
if (fregs)
ftrace_regs_set_instruction_pointer(fregs, ret);
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
trace.retval = fgraph_ret_regs_return_value(ret_regs);
trace.retval = ftrace_regs_get_return_value(fregs);
#endif
bitmap = get_bitmap_bits(current, offset);
@@ -828,7 +839,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
#ifdef CONFIG_HAVE_STATIC_CALL
if (static_branch_likely(&fgraph_do_direct)) {
if (test_bit(fgraph_direct_gops->idx, &bitmap))
static_call(fgraph_retfunc)(&trace, fgraph_direct_gops);
static_call(fgraph_retfunc)(&trace, fgraph_direct_gops, fregs);
} else
#endif
{
@@ -838,7 +849,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
if (gops == &fgraph_stub)
continue;
gops->retfunc(&trace, gops);
gops->retfunc(&trace, gops, fregs);
}
}
@@ -855,14 +866,14 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
}
/*
* After all architectures have selected HAVE_FUNCTION_GRAPH_RETVAL, we can
* leave only ftrace_return_to_handler(ret_regs).
* After all architectures have selected HAVE_FUNCTION_GRAPH_FREGS, we can
* leave only ftrace_return_to_handler(fregs).
*/
#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL
unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs)
#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS
unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs)
{
return __ftrace_return_to_handler(ret_regs,
fgraph_ret_regs_frame_pointer(ret_regs));
return __ftrace_return_to_handler(fregs,
ftrace_regs_get_frame_pointer(fregs));
}
#else
unsigned long ftrace_return_to_handler(unsigned long frame_pointer)
@@ -1010,7 +1021,8 @@ void ftrace_graph_sleep_time_control(bool enable)
* Simply points to ftrace_stub, but with the proper protocol.
* Defined by the linker script in linux/vmlinux.lds.h
*/
void ftrace_stub_graph(struct ftrace_graph_ret *trace, struct fgraph_ops *gops);
void ftrace_stub_graph(struct ftrace_graph_ret *trace, struct fgraph_ops *gops,
struct ftrace_regs *fregs);
/* The callbacks that hook a function */
trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph;
@@ -1174,7 +1186,8 @@ void ftrace_graph_exit_task(struct task_struct *t)
#ifdef CONFIG_DYNAMIC_FTRACE
static int fgraph_pid_func(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
struct fgraph_ops *gops,
struct ftrace_regs *fregs)
{
struct trace_array *tr = gops->ops.private;
int pid;
@@ -1188,7 +1201,7 @@ static int fgraph_pid_func(struct ftrace_graph_ent *trace,
return 0;
}
return gops->saved_func(trace, gops);
return gops->saved_func(trace, gops, fregs);
}
void fgraph_update_pid_func(void)


@@ -8,98 +8,224 @@
#include <linux/fprobe.h>
#include <linux/kallsyms.h>
#include <linux/kprobes.h>
#include <linux/rethook.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/sort.h>
#include <asm/fprobe.h>
#include "trace.h"
struct fprobe_rethook_node {
struct rethook_node node;
unsigned long entry_ip;
unsigned long entry_parent_ip;
char data[];
};
#define FPROBE_IP_HASH_BITS 8
#define FPROBE_IP_TABLE_SIZE (1 << FPROBE_IP_HASH_BITS)
static inline void __fprobe_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
#define FPROBE_HASH_BITS 6
#define FPROBE_TABLE_SIZE (1 << FPROBE_HASH_BITS)
#define SIZE_IN_LONG(x) ((x + sizeof(long) - 1) >> (sizeof(long) == 8 ? 3 : 2))
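/* Example, assuming 8-byte longs: SIZE_IN_LONG(12) == 2, SIZE_IN_LONG(16) == 2,
 * SIZE_IN_LONG(17) == 3 -- a byte count rounded up to a count of longs.
 */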
/*
* fprobe_table: holds 'fprobe_hlist::hlist' for checking whether the fprobe
* still exists. The key is the address of the fprobe instance.
* fprobe_ip_table: holds 'fprobe_hlist::array[*]' for searching the fprobe
* instance related to the function address. The key is the ftrace IP
* address.
*
* When unregistering the fprobe, fprobe_hlist::fp and fprobe_hlist::array[*].fp
* are set to NULL and those entries are deleted from both hash tables (by
* hlist_del_rcu). After an RCU grace period, the fprobe_hlist itself will be
* released.
*
* fprobe_table and fprobe_ip_table can be accessed from either
* - normal hlist traversal and RCU add/del while 'fprobe_mutex' is held, or
* - RCU hlist traversal with preemption disabled.
*/
static struct hlist_head fprobe_table[FPROBE_TABLE_SIZE];
static struct hlist_head fprobe_ip_table[FPROBE_IP_TABLE_SIZE];
static DEFINE_MUTEX(fprobe_mutex);
/*
* Find the first fprobe in the hlist. The hlist is iterated twice in the entry
* probe: once for computing the total required size, and a second time for
* calling back the user handlers.
* Thus the hlist in the fprobe_table must be sorted, and a new probe needs to
* be added *before* the first fprobe.
*/
static struct fprobe_hlist_node *find_first_fprobe_node(unsigned long ip)
{
struct fprobe_rethook_node *fpr;
struct rethook_node *rh = NULL;
struct fprobe *fp;
void *entry_data = NULL;
int ret = 0;
struct fprobe_hlist_node *node;
struct hlist_head *head;
fp = container_of(ops, struct fprobe, ops);
if (fp->exit_handler) {
rh = rethook_try_get(fp->rethook);
if (!rh) {
fp->nmissed++;
return;
}
fpr = container_of(rh, struct fprobe_rethook_node, node);
fpr->entry_ip = ip;
fpr->entry_parent_ip = parent_ip;
if (fp->entry_data_size)
entry_data = fpr->data;
head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)];
hlist_for_each_entry_rcu(node, head, hlist,
lockdep_is_held(&fprobe_mutex)) {
if (node->addr == ip)
return node;
}
return NULL;
}
NOKPROBE_SYMBOL(find_first_fprobe_node);
if (fp->entry_handler)
ret = fp->entry_handler(fp, ip, parent_ip, ftrace_get_regs(fregs), entry_data);
/* Node insertion and deletion requires the fprobe_mutex */
static void insert_fprobe_node(struct fprobe_hlist_node *node)
{
unsigned long ip = node->addr;
struct fprobe_hlist_node *next;
struct hlist_head *head;
/* If entry_handler returns !0, nmissed is not counted. */
if (rh) {
if (ret)
rethook_recycle(rh);
else
rethook_hook(rh, ftrace_get_regs(fregs), true);
lockdep_assert_held(&fprobe_mutex);
next = find_first_fprobe_node(ip);
if (next) {
hlist_add_before_rcu(&node->hlist, &next->hlist);
return;
}
head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)];
hlist_add_head_rcu(&node->hlist, head);
}
static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
/* Return true if there are synonyms */
static bool delete_fprobe_node(struct fprobe_hlist_node *node)
{
struct fprobe *fp;
int bit;
fp = container_of(ops, struct fprobe, ops);
if (fprobe_disabled(fp))
return;
/* recursion detection has to go before any traceable function and
* all functions before this point should be marked as notrace
*/
bit = ftrace_test_recursion_trylock(ip, parent_ip);
if (bit < 0) {
fp->nmissed++;
return;
}
__fprobe_handler(ip, parent_ip, ops, fregs);
ftrace_test_recursion_unlock(bit);
lockdep_assert_held(&fprobe_mutex);
WRITE_ONCE(node->fp, NULL);
hlist_del_rcu(&node->hlist);
return !!find_first_fprobe_node(node->addr);
}
NOKPROBE_SYMBOL(fprobe_handler);
static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
/* Check existence of the fprobe */
static bool is_fprobe_still_exist(struct fprobe *fp)
{
struct fprobe *fp;
int bit;
struct hlist_head *head;
struct fprobe_hlist *fph;
fp = container_of(ops, struct fprobe, ops);
if (fprobe_disabled(fp))
return;
/* recursion detection has to go before any traceable function and
* all functions called before this point should be marked as notrace
*/
bit = ftrace_test_recursion_trylock(ip, parent_ip);
if (bit < 0) {
fp->nmissed++;
return;
head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)];
hlist_for_each_entry_rcu(fph, head, hlist,
lockdep_is_held(&fprobe_mutex)) {
if (fph->fp == fp)
return true;
}
return false;
}
NOKPROBE_SYMBOL(is_fprobe_still_exist);
static int add_fprobe_hash(struct fprobe *fp)
{
struct fprobe_hlist *fph = fp->hlist_array;
struct hlist_head *head;
lockdep_assert_held(&fprobe_mutex);
if (WARN_ON_ONCE(!fph))
return -EINVAL;
if (is_fprobe_still_exist(fp))
return -EEXIST;
head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)];
hlist_add_head_rcu(&fp->hlist_array->hlist, head);
return 0;
}
static int del_fprobe_hash(struct fprobe *fp)
{
struct fprobe_hlist *fph = fp->hlist_array;
lockdep_assert_held(&fprobe_mutex);
if (WARN_ON_ONCE(!fph))
return -EINVAL;
if (!is_fprobe_still_exist(fp))
return -ENOENT;
fph->fp = NULL;
hlist_del_rcu(&fph->hlist);
return 0;
}
#ifdef ARCH_DEFINE_ENCODE_FPROBE_HEADER
/* The arch should encode fprobe_header info into one unsigned long */
#define FPROBE_HEADER_SIZE_IN_LONG 1
static inline bool write_fprobe_header(unsigned long *stack,
struct fprobe *fp, unsigned int size_words)
{
if (WARN_ON_ONCE(size_words > MAX_FPROBE_DATA_SIZE_WORD ||
!arch_fprobe_header_encodable(fp)))
return false;
*stack = arch_encode_fprobe_header(fp, size_words);
return true;
}
static inline void read_fprobe_header(unsigned long *stack,
struct fprobe **fp, unsigned int *size_words)
{
*fp = arch_decode_fprobe_header_fp(*stack);
*size_words = arch_decode_fprobe_header_size(*stack);
}
#else
/* Generic fprobe_header */
struct __fprobe_header {
struct fprobe *fp;
unsigned long size_words;
} __packed;
#define FPROBE_HEADER_SIZE_IN_LONG SIZE_IN_LONG(sizeof(struct __fprobe_header))
static inline bool write_fprobe_header(unsigned long *stack,
struct fprobe *fp, unsigned int size_words)
{
struct __fprobe_header *fph = (struct __fprobe_header *)stack;
if (WARN_ON_ONCE(size_words > MAX_FPROBE_DATA_SIZE_WORD))
return false;
fph->fp = fp;
fph->size_words = size_words;
return true;
}
static inline void read_fprobe_header(unsigned long *stack,
struct fprobe **fp, unsigned int *size_words)
{
struct __fprobe_header *fph = (struct __fprobe_header *)stack;
*fp = fph->fp;
*size_words = fph->size_words;
}
#endif
/*
* fprobe shadow stack management:
* Since fprobe shares a single fgraph_ops, it needs to share the stack entry
* among the probes on the same function exit. Note that because a new probe
* can be registered before a target function returns, we cannot use the hash
* table to find the corresponding probes. Thus the probe address is stored on
* the shadow stack along with its entry data size.
*/
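/*
 * Conceptual layout of one reserved region on the shadow stack (generic
 * header case; widths in longs):
 *
 *	[ fp | size_words ][ entry_data ... ]
 *	header: FPROBE_HEADER_SIZE_IN_LONG, data: SIZE_IN_LONG(entry_data_size)
 */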
static inline int __fprobe_handler(unsigned long ip, unsigned long parent_ip,
struct fprobe *fp, struct ftrace_regs *fregs,
void *data)
{
if (!fp->entry_handler)
return 0;
return fp->entry_handler(fp, ip, parent_ip, fregs, data);
}
static inline int __fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
struct fprobe *fp, struct ftrace_regs *fregs,
void *data)
{
int ret;
/*
* This user handler is shared with other kprobes and is not expected to be
* called recursively. So if any other kprobe handler is running, this will
@@ -108,44 +234,183 @@ static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
*/
if (unlikely(kprobe_running())) {
fp->nmissed++;
goto recursion_unlock;
return 0;
}
kprobe_busy_begin();
__fprobe_handler(ip, parent_ip, ops, fregs);
ret = __fprobe_handler(ip, parent_ip, fp, fregs, data);
kprobe_busy_end();
recursion_unlock:
ftrace_test_recursion_unlock(bit);
return ret;
}
static void fprobe_exit_handler(struct rethook_node *rh, void *data,
unsigned long ret_ip, struct pt_regs *regs)
static int fprobe_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops,
struct ftrace_regs *fregs)
{
struct fprobe *fp = (struct fprobe *)data;
struct fprobe_rethook_node *fpr;
int bit;
struct fprobe_hlist_node *node, *first;
unsigned long *fgraph_data = NULL;
unsigned long func = trace->func;
unsigned long ret_ip;
int reserved_words;
struct fprobe *fp;
int used, ret;
if (!fp || fprobe_disabled(fp))
return;
if (WARN_ON_ONCE(!fregs))
return 0;
fpr = container_of(rh, struct fprobe_rethook_node, node);
first = node = find_first_fprobe_node(func);
if (unlikely(!first))
return 0;
reserved_words = 0;
hlist_for_each_entry_from_rcu(node, hlist) {
if (node->addr != func)
break;
fp = READ_ONCE(node->fp);
if (!fp || !fp->exit_handler)
continue;
/*
* Since fprobe can be enabled until the next loop, we ignore the
* fprobe's disabled flag in this loop.
*/
reserved_words +=
FPROBE_HEADER_SIZE_IN_LONG + SIZE_IN_LONG(fp->entry_data_size);
}
node = first;
if (reserved_words) {
fgraph_data = fgraph_reserve_data(gops->idx, reserved_words * sizeof(long));
if (unlikely(!fgraph_data)) {
hlist_for_each_entry_from_rcu(node, hlist) {
if (node->addr != func)
break;
fp = READ_ONCE(node->fp);
if (fp && !fprobe_disabled(fp))
fp->nmissed++;
}
return 0;
}
}
/*
* we need to assure no calls to traceable functions in-between the
* end of fprobe_handler and the beginning of fprobe_exit_handler.
* TODO: recursion detection has been done in the fgraph. Thus we need
* to add a callback to increment missed counter.
*/
bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip);
if (bit < 0) {
fp->nmissed++;
ret_ip = ftrace_regs_get_return_address(fregs);
used = 0;
hlist_for_each_entry_from_rcu(node, hlist) {
int data_size;
void *data;
if (node->addr != func)
break;
fp = READ_ONCE(node->fp);
if (!fp || fprobe_disabled(fp))
continue;
data_size = fp->entry_data_size;
if (data_size && fp->exit_handler)
data = fgraph_data + used + FPROBE_HEADER_SIZE_IN_LONG;
else
data = NULL;
if (fprobe_shared_with_kprobes(fp))
ret = __fprobe_kprobe_handler(func, ret_ip, fp, fregs, data);
else
ret = __fprobe_handler(func, ret_ip, fp, fregs, data);
/* If entry_handler returns !0, nmissed is not counted and the exit_handler is skipped. */
if (!ret && fp->exit_handler) {
int size_words = SIZE_IN_LONG(data_size);
if (write_fprobe_header(&fgraph_data[used], fp, size_words))
used += FPROBE_HEADER_SIZE_IN_LONG + size_words;
}
}
if (used < reserved_words)
memset(fgraph_data + used, 0, reserved_words - used);
/* If any exit_handler is set, data must be used. */
return used != 0;
}
NOKPROBE_SYMBOL(fprobe_entry);
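/*
 * Note (cf. the "Clear out any saved storage" paths in
 * function_graph_enter_regs() above): a nonzero return from an entry
 * callback keeps the data reserved via fgraph_reserve_data() and arranges
 * for the return callback to run; a zero return discards both.
 */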
static void fprobe_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops,
struct ftrace_regs *fregs)
{
unsigned long *fgraph_data = NULL;
unsigned long ret_ip;
struct fprobe *fp;
int size, curr;
int size_words;
fgraph_data = (unsigned long *)fgraph_retrieve_data(gops->idx, &size);
if (WARN_ON_ONCE(!fgraph_data))
return;
size_words = SIZE_IN_LONG(size);
ret_ip = ftrace_regs_get_instruction_pointer(fregs);
preempt_disable();
curr = 0;
while (size_words > curr) {
read_fprobe_header(&fgraph_data[curr], &fp, &size);
if (!fp)
break;
curr += FPROBE_HEADER_SIZE_IN_LONG;
if (is_fprobe_still_exist(fp) && !fprobe_disabled(fp)) {
if (WARN_ON_ONCE(curr + size > size_words))
break;
fp->exit_handler(fp, trace->func, ret_ip, fregs,
size ? fgraph_data + curr : NULL);
}
curr += size;
}
preempt_enable();
}
NOKPROBE_SYMBOL(fprobe_return);
static struct fgraph_ops fprobe_graph_ops = {
.entryfunc = fprobe_entry,
.retfunc = fprobe_return,
};
static int fprobe_graph_active;
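With fprobe layered on fgraph, every probe now funnels through the single fprobe_graph_ops above. A hedged sketch of a client using the updated ftrace_regs-based callback signatures (handler names and the probed symbol are illustrative):

static int my_entry(struct fprobe *fp, unsigned long ip, unsigned long ret_ip,
		    struct ftrace_regs *fregs, void *data)
{
	return 0;	/* nonzero would skip the exit handler for this hit */
}

static void my_exit(struct fprobe *fp, unsigned long ip, unsigned long ret_ip,
		    struct ftrace_regs *fregs, void *data)
{
}

static struct fprobe my_fp = {
	.entry_handler = my_entry,
	.exit_handler = my_exit,
};

/* e.g.: register_fprobe(&my_fp, "vfs_read", NULL); */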
/* Add @addrs to the ftrace filter and register fgraph if needed. */
static int fprobe_graph_add_ips(unsigned long *addrs, int num)
{
int ret;
lockdep_assert_held(&fprobe_mutex);
ret = ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 0, 0);
if (ret)
return ret;
if (!fprobe_graph_active) {
ret = register_ftrace_graph(&fprobe_graph_ops);
if (WARN_ON_ONCE(ret)) {
ftrace_free_filter(&fprobe_graph_ops.ops);
return ret;
}
}
fprobe_graph_active++;
return 0;
}
/* Remove @addrs from the ftrace filter and unregister fgraph if possible. */
static void fprobe_graph_remove_ips(unsigned long *addrs, int num)
{
lockdep_assert_held(&fprobe_mutex);
fprobe_graph_active--;
if (!fprobe_graph_active) {
/* Q: should we unregister it? */
unregister_ftrace_graph(&fprobe_graph_ops);
return;
}
fp->exit_handler(fp, fpr->entry_ip, ret_ip, regs,
fp->entry_data_size ? (void *)fpr->data : NULL);
ftrace_test_recursion_unlock(bit);
ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0);
}
NOKPROBE_SYMBOL(fprobe_exit_handler);
static int symbols_cmp(const void *a, const void *b)
{
@@ -175,53 +440,97 @@ static unsigned long *get_ftrace_locations(const char **syms, int num)
return ERR_PTR(-ENOENT);
}
static void fprobe_init(struct fprobe *fp)
struct filter_match_data {
const char *filter;
const char *notfilter;
size_t index;
size_t size;
unsigned long *addrs;
};
static int filter_match_callback(void *data, const char *name, unsigned long addr)
{
fp->nmissed = 0;
if (fprobe_shared_with_kprobes(fp))
fp->ops.func = fprobe_kprobe_handler;
else
fp->ops.func = fprobe_handler;
fp->ops.flags |= FTRACE_OPS_FL_SAVE_REGS;
struct filter_match_data *match = data;
if (!glob_match(match->filter, name) ||
(match->notfilter && glob_match(match->notfilter, name)))
return 0;
if (!ftrace_location(addr))
return 0;
if (match->addrs)
match->addrs[match->index] = addr;
match->index++;
return match->index == match->size;
}
static int fprobe_init_rethook(struct fprobe *fp, int num)
/*
* Make IP list from the filter/no-filter glob patterns.
* Return the number of matched symbols, or -ENOENT.
*/
static int ip_list_from_filter(const char *filter, const char *notfilter,
unsigned long *addrs, size_t size)
{
int size;
struct filter_match_data match = { .filter = filter, .notfilter = notfilter,
.index = 0, .size = size, .addrs = addrs};
int ret;
if (!fp->exit_handler) {
fp->rethook = NULL;
return 0;
}
ret = kallsyms_on_each_symbol(filter_match_callback, &match);
if (ret < 0)
return ret;
ret = module_kallsyms_on_each_symbol(NULL, filter_match_callback, &match);
if (ret < 0)
return ret;
/* Initialize rethook if needed */
if (fp->nr_maxactive)
num = fp->nr_maxactive;
else
num *= num_possible_cpus() * 2;
if (num <= 0)
return -EINVAL;
size = sizeof(struct fprobe_rethook_node) + fp->entry_data_size;
/* Initialize rethook */
fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler, size, num);
if (IS_ERR(fp->rethook))
return PTR_ERR(fp->rethook);
return 0;
return match.index ?: -ENOENT;
}
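For illustration (glob patterns invented), a caller first sizes the result and then collects it, the same two-pass pattern register_fprobe() uses below:

/* count matches, then fill a table of exactly that size */
int n = ip_list_from_filter("vfs_*", "vfs_fstat*", NULL, FPROBE_IPS_MAX);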
static void fprobe_fail_cleanup(struct fprobe *fp)
{
if (!IS_ERR_OR_NULL(fp->rethook)) {
/* Don't need to cleanup rethook->handler because this is not used. */
rethook_free(fp->rethook);
fp->rethook = NULL;
}
ftrace_free_filter(&fp->ops);
kfree(fp->hlist_array);
fp->hlist_array = NULL;
}
/* Initialize the fprobe data structure. */
static int fprobe_init(struct fprobe *fp, unsigned long *addrs, int num)
{
struct fprobe_hlist *hlist_array;
unsigned long addr;
int size, i;
if (!fp || !addrs || num <= 0)
return -EINVAL;
size = ALIGN(fp->entry_data_size, sizeof(long));
if (size > MAX_FPROBE_DATA_SIZE)
return -E2BIG;
fp->entry_data_size = size;
hlist_array = kzalloc(struct_size(hlist_array, array, num), GFP_KERNEL);
if (!hlist_array)
return -ENOMEM;
fp->nmissed = 0;
hlist_array->size = num;
fp->hlist_array = hlist_array;
hlist_array->fp = fp;
for (i = 0; i < num; i++) {
hlist_array->array[i].fp = fp;
addr = ftrace_location(addrs[i]);
if (!addr) {
fprobe_fail_cleanup(fp);
return -ENOENT;
}
hlist_array->array[i].addr = addr;
}
return 0;
}
#define FPROBE_IPS_MAX INT_MAX
/**
* register_fprobe() - Register fprobe to ftrace by pattern.
* @fp: A fprobe data structure to be registered.
@@ -235,46 +544,24 @@ static void fprobe_fail_cleanup(struct fprobe *fp)
*/
int register_fprobe(struct fprobe *fp, const char *filter, const char *notfilter)
{
struct ftrace_hash *hash;
unsigned char *str;
int ret, len;
unsigned long *addrs;
int ret;
if (!fp || !filter)
return -EINVAL;
fprobe_init(fp);
len = strlen(filter);
str = kstrdup(filter, GFP_KERNEL);
ret = ftrace_set_filter(&fp->ops, str, len, 0);
kfree(str);
if (ret)
ret = ip_list_from_filter(filter, notfilter, NULL, FPROBE_IPS_MAX);
if (ret < 0)
return ret;
if (notfilter) {
len = strlen(notfilter);
str = kstrdup(notfilter, GFP_KERNEL);
ret = ftrace_set_notrace(&fp->ops, str, len, 0);
kfree(str);
if (ret)
goto out;
}
addrs = kcalloc(ret, sizeof(unsigned long), GFP_KERNEL);
if (!addrs)
return -ENOMEM;
ret = ip_list_from_filter(filter, notfilter, addrs, ret);
if (ret > 0)
ret = register_fprobe_ips(fp, addrs, ret);
/* TODO:
* correctly calculate the total number of filtered symbols
* from both filter and notfilter.
*/
hash = rcu_access_pointer(fp->ops.local_hash.filter_hash);
if (WARN_ON_ONCE(!hash))
goto out;
ret = fprobe_init_rethook(fp, (int)hash->count);
if (!ret)
ret = register_ftrace_function(&fp->ops);
out:
if (ret)
fprobe_fail_cleanup(fp);
kfree(addrs);
return ret;
}
EXPORT_SYMBOL_GPL(register_fprobe);
@@ -282,7 +569,7 @@ EXPORT_SYMBOL_GPL(register_fprobe);
/**
* register_fprobe_ips() - Register fprobe to ftrace by address.
* @fp: A fprobe data structure to be registered.
* @addrs: An array of target ftrace location addresses.
* @addrs: An array of target function addresses.
* @num: The number of entries of @addrs.
*
* Register @fp to ftrace for enabling the probe on the address given by @addrs.
@@ -294,23 +581,27 @@ EXPORT_SYMBOL_GPL(register_fprobe);
*/
int register_fprobe_ips(struct fprobe *fp, unsigned long *addrs, int num)
{
int ret;
struct fprobe_hlist *hlist_array;
int ret, i;
if (!fp || !addrs || num <= 0)
return -EINVAL;
fprobe_init(fp);
ret = ftrace_set_filter_ips(&fp->ops, addrs, num, 0, 0);
ret = fprobe_init(fp, addrs, num);
if (ret)
return ret;
ret = fprobe_init_rethook(fp, num);
if (!ret)
ret = register_ftrace_function(&fp->ops);
mutex_lock(&fprobe_mutex);
hlist_array = fp->hlist_array;
ret = fprobe_graph_add_ips(addrs, num);
if (!ret) {
add_fprobe_hash(fp);
for (i = 0; i < hlist_array->size; i++)
insert_fprobe_node(&hlist_array->array[i]);
}
mutex_unlock(&fprobe_mutex);
if (ret)
fprobe_fail_cleanup(fp);
return ret;
}
EXPORT_SYMBOL_GPL(register_fprobe_ips);
@@ -348,14 +639,13 @@ EXPORT_SYMBOL_GPL(register_fprobe_syms);
bool fprobe_is_registered(struct fprobe *fp)
{
if (!fp || (fp->ops.saved_func != fprobe_handler &&
fp->ops.saved_func != fprobe_kprobe_handler))
if (!fp || !fp->hlist_array)
return false;
return true;
}
/**
* unregister_fprobe() - Unregister fprobe from ftrace
* unregister_fprobe() - Unregister fprobe.
* @fp: A fprobe data structure to be unregistered.
*
* Unregister fprobe (and remove ftrace hooks from the function entries).
@@ -364,23 +654,41 @@ bool fprobe_is_registered(struct fprobe *fp)
*/
int unregister_fprobe(struct fprobe *fp)
{
int ret;
struct fprobe_hlist *hlist_array;
unsigned long *addrs = NULL;
int ret = 0, i, count;
if (!fprobe_is_registered(fp))
return -EINVAL;
mutex_lock(&fprobe_mutex);
if (!fp || !is_fprobe_still_exist(fp)) {
ret = -EINVAL;
goto out;
}
if (!IS_ERR_OR_NULL(fp->rethook))
rethook_stop(fp->rethook);
hlist_array = fp->hlist_array;
addrs = kcalloc(hlist_array->size, sizeof(unsigned long), GFP_KERNEL);
if (!addrs) {
ret = -ENOMEM; /* TODO: Fallback to one-by-one loop */
goto out;
}
ret = unregister_ftrace_function(&fp->ops);
if (ret < 0)
return ret;
/* Remove non-synonym ips from table and hash */
count = 0;
for (i = 0; i < hlist_array->size; i++) {
if (!delete_fprobe_node(&hlist_array->array[i]))
addrs[count++] = hlist_array->array[i].addr;
}
del_fprobe_hash(fp);
if (!IS_ERR_OR_NULL(fp->rethook))
rethook_free(fp->rethook);
if (count)
fprobe_graph_remove_ips(addrs, count);
ftrace_free_filter(&fp->ops);
kfree_rcu(hlist_array, rcu);
fp->hlist_array = NULL;
out:
mutex_unlock(&fprobe_mutex);
kfree(addrs);
return ret;
}
EXPORT_SYMBOL_GPL(unregister_fprobe);


@@ -536,24 +536,21 @@ static int function_stat_show(struct seq_file *m, void *v)
{
struct ftrace_profile *rec = v;
char str[KSYM_SYMBOL_LEN];
int ret = 0;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
static struct trace_seq s;
unsigned long long avg;
unsigned long long stddev;
#endif
mutex_lock(&ftrace_profile_lock);
guard(mutex)(&ftrace_profile_lock);
/* we raced with function_profile_reset() */
if (unlikely(rec->counter == 0)) {
ret = -EBUSY;
goto out;
}
if (unlikely(rec->counter == 0))
return -EBUSY;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
avg = div64_ul(rec->time, rec->counter);
if (tracing_thresh && (avg < tracing_thresh))
goto out;
return 0;
#endif
kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
@@ -590,10 +587,8 @@ static int function_stat_show(struct seq_file *m, void *v)
trace_print_seq(m, &s);
#endif
seq_putc(m, '\n');
out:
mutex_unlock(&ftrace_profile_lock);
return ret;
return 0;
}
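These conversions use the scope-based cleanup helpers from <linux/cleanup.h>; a minimal sketch of the guard() pattern (function name and condition invented for illustration):

static int example_locked_op(void)
{
	guard(mutex)(&ftrace_profile_lock);

	if (!ftrace_profile_enabled)
		return -ENODEV;	/* mutex is released automatically here */

	return 0;		/* ...and here */
}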
static void ftrace_profile_reset(struct ftrace_profile_stat *stat)
@@ -789,27 +784,24 @@ function_profile_call(unsigned long ip, unsigned long parent_ip,
{
struct ftrace_profile_stat *stat;
struct ftrace_profile *rec;
unsigned long flags;
if (!ftrace_profile_enabled)
return;
local_irq_save(flags);
guard(preempt_notrace)();
stat = this_cpu_ptr(&ftrace_profile_stats);
if (!stat->hash || !ftrace_profile_enabled)
goto out;
return;
rec = ftrace_find_profiled_func(stat, ip);
if (!rec) {
rec = ftrace_profile_alloc(stat, ip);
if (!rec)
goto out;
return;
}
rec->counter++;
out:
local_irq_restore(flags);
}
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -827,7 +819,8 @@ struct profile_fgraph_data {
};
static int profile_graph_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
struct fgraph_ops *gops,
struct ftrace_regs *fregs)
{
struct profile_fgraph_data *profile_data;
@@ -849,26 +842,27 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace,
}
static void profile_graph_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
struct fgraph_ops *gops,
struct ftrace_regs *fregs)
{
struct profile_fgraph_data *profile_data;
struct ftrace_profile_stat *stat;
unsigned long long calltime;
unsigned long long rettime = trace_clock_local();
struct ftrace_profile *rec;
unsigned long flags;
int size;
local_irq_save(flags);
guard(preempt_notrace)();
stat = this_cpu_ptr(&ftrace_profile_stats);
if (!stat->hash || !ftrace_profile_enabled)
goto out;
return;
profile_data = fgraph_retrieve_data(gops->idx, &size);
/* If the calltime was zero'd ignore it */
if (!profile_data || !profile_data->calltime)
goto out;
return;
calltime = rettime - profile_data->calltime;
@@ -896,9 +890,6 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
rec->time += calltime;
rec->time_squared += calltime * calltime;
}
out:
local_irq_restore(flags);
}
static struct fgraph_ops fprofiler_ops = {
@@ -946,20 +937,16 @@ ftrace_profile_write(struct file *filp, const char __user *ubuf,
val = !!val;
mutex_lock(&ftrace_profile_lock);
guard(mutex)(&ftrace_profile_lock);
if (ftrace_profile_enabled ^ val) {
if (val) {
ret = ftrace_profile_init();
if (ret < 0) {
cnt = ret;
goto out;
}
if (ret < 0)
return ret;
ret = register_ftrace_profiler();
if (ret < 0) {
cnt = ret;
goto out;
}
if (ret < 0)
return ret;
ftrace_profile_enabled = 1;
} else {
ftrace_profile_enabled = 0;
@@ -970,8 +957,6 @@ ftrace_profile_write(struct file *filp, const char __user *ubuf,
unregister_ftrace_profiler();
}
}
out:
mutex_unlock(&ftrace_profile_lock);
*ppos += cnt;
@@ -1671,14 +1656,12 @@ unsigned long ftrace_location(unsigned long ip)
loc = ftrace_location_range(ip, ip);
if (!loc) {
if (!kallsyms_lookup_size_offset(ip, &size, &offset))
goto out;
return 0;
/* map sym+0 to __fentry__ */
if (!offset)
loc = ftrace_location_range(ip, ip + size - 1);
}
out:
return loc;
}
@@ -2073,7 +2056,7 @@ rollback:
continue;
if (rec == end)
goto err_out;
return -EBUSY;
in_old = !!ftrace_lookup_ip(old_hash, rec->ip);
in_new = !!ftrace_lookup_ip(new_hash, rec->ip);
@@ -2086,7 +2069,6 @@ rollback:
rec->flags |= FTRACE_FL_IPMODIFY;
} while_for_each_ftrace_rec();
err_out:
return -EBUSY;
}
@@ -4982,10 +4964,6 @@ static int cache_mod(struct trace_array *tr,
return ftrace_add_mod(tr, func, module, enable);
}
static int
ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
int reset, int enable);
#ifdef CONFIG_MODULES
static void process_mod_list(struct list_head *head, struct ftrace_ops *ops,
char *mod, bool enable)
@@ -5615,20 +5593,15 @@ static DEFINE_MUTEX(ftrace_cmd_mutex);
__init int register_ftrace_command(struct ftrace_func_command *cmd)
{
struct ftrace_func_command *p;
int ret = 0;
mutex_lock(&ftrace_cmd_mutex);
guard(mutex)(&ftrace_cmd_mutex);
list_for_each_entry(p, &ftrace_commands, list) {
if (strcmp(cmd->name, p->name) == 0) {
ret = -EBUSY;
goto out_unlock;
}
if (strcmp(cmd->name, p->name) == 0)
return -EBUSY;
}
list_add(&cmd->list, &ftrace_commands);
out_unlock:
mutex_unlock(&ftrace_cmd_mutex);
return ret;
return 0;
}
/*
@@ -5638,20 +5611,17 @@ __init int register_ftrace_command(struct ftrace_func_command *cmd)
__init int unregister_ftrace_command(struct ftrace_func_command *cmd)
{
struct ftrace_func_command *p, *n;
int ret = -ENODEV;
mutex_lock(&ftrace_cmd_mutex);
guard(mutex)(&ftrace_cmd_mutex);
list_for_each_entry_safe(p, n, &ftrace_commands, list) {
if (strcmp(cmd->name, p->name) == 0) {
ret = 0;
list_del_init(&p->list);
goto out_unlock;
return 0;
}
}
out_unlock:
mutex_unlock(&ftrace_cmd_mutex);
return ret;
return -ENODEV;
}
static int ftrace_process_regex(struct ftrace_iterator *iter,
@@ -5661,7 +5631,7 @@ static int ftrace_process_regex(struct ftrace_iterator *iter,
struct trace_array *tr = iter->ops->private;
char *func, *command, *next = buff;
struct ftrace_func_command *p;
int ret = -EINVAL;
int ret;
func = strsep(&next, ":");
@@ -5678,17 +5648,14 @@ static int ftrace_process_regex(struct ftrace_iterator *iter,
command = strsep(&next, ":");
mutex_lock(&ftrace_cmd_mutex);
list_for_each_entry(p, &ftrace_commands, list) {
if (strcmp(p->name, command) == 0) {
ret = p->func(tr, hash, func, command, next, enable);
goto out_unlock;
}
}
out_unlock:
mutex_unlock(&ftrace_cmd_mutex);
guard(mutex)(&ftrace_cmd_mutex);
return ret;
list_for_each_entry(p, &ftrace_commands, list) {
if (strcmp(p->name, command) == 0)
return p->func(tr, hash, func, command, next, enable);
}
return -EINVAL;
}
static ssize_t
@@ -5722,12 +5689,10 @@ ftrace_regex_write(struct file *file, const char __user *ubuf,
parser->idx, enable);
trace_parser_clear(parser);
if (ret < 0)
goto out;
return ret;
}
ret = read;
out:
return ret;
return read;
}
ssize_t
@@ -5788,7 +5753,7 @@ ftrace_match_addr(struct ftrace_hash *hash, unsigned long *ips,
static int
ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
unsigned long *ips, unsigned int cnt,
int remove, int reset, int enable)
int remove, int reset, int enable, char *mod)
{
struct ftrace_hash **orig_hash;
struct ftrace_hash *hash;
@@ -5814,7 +5779,15 @@ ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
goto out_regex_unlock;
}
if (buf && !ftrace_match_records(hash, buf, len)) {
if (buf && !match_records(hash, buf, len, mod)) {
/* If this was for a module and nothing was enabled, flag it */
if (mod)
(*orig_hash)->flags |= FTRACE_HASH_FL_MOD;
/*
* Even if it is a mod, return an error to let the caller know
* nothing was added
*/
ret = -EINVAL;
goto out_regex_unlock;
}
@@ -5839,7 +5812,7 @@ static int
ftrace_set_addr(struct ftrace_ops *ops, unsigned long *ips, unsigned int cnt,
int remove, int reset, int enable)
{
return ftrace_set_hash(ops, NULL, 0, ips, cnt, remove, reset, enable);
return ftrace_set_hash(ops, NULL, 0, ips, cnt, remove, reset, enable, NULL);
}
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
@@ -6217,7 +6190,38 @@ static int
ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
int reset, int enable)
{
return ftrace_set_hash(ops, buf, len, NULL, 0, 0, reset, enable);
char *mod = NULL, *func, *command, *next = buf;
char *tmp __free(kfree) = NULL;
struct trace_array *tr = ops->private;
int ret;
func = strsep(&next, ":");
/* This can also handle :mod: parsing */
if (next) {
if (!tr)
return -EINVAL;
command = strsep(&next, ":");
if (strcmp(command, "mod") != 0)
return -EINVAL;
mod = next;
len = command - func;
/* Save the original func as ftrace_set_hash() can modify it */
tmp = kstrdup(func, GFP_KERNEL);
}
ret = ftrace_set_hash(ops, func, len, NULL, 0, 0, reset, enable, mod);
if (tr && mod && ret < 0) {
/* Did tmp fail to allocate? */
if (!tmp)
return -ENOMEM;
ret = cache_mod(tr, tmp, mod, enable);
}
return ret;
}
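The syntax accepted here matches set_ftrace_filter's ":mod:" command, which is what lets the boot-time filter cache module patterns; for example (module name illustrative), booting with

	ftrace_filter=*:mod:i915

enables tracing of every i915 function, caching the pattern until the module loads.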
/**
@@ -6381,6 +6385,14 @@ ftrace_set_early_filter(struct ftrace_ops *ops, char *buf, int enable)
ftrace_ops_init(ops);
/* The trace_array is needed for caching module function filters */
if (!ops->private) {
struct trace_array *tr = trace_get_global_array();
ops->private = tr;
ftrace_init_trace_array(tr);
}
while (buf) {
func = strsep(&buf, ",");
ftrace_set_regex(ops, func, strlen(func), 0, enable);
@@ -7814,9 +7826,14 @@ static void ftrace_update_trampoline(struct ftrace_ops *ops)
void ftrace_init_trace_array(struct trace_array *tr)
{
if (tr->flags & TRACE_ARRAY_FL_MOD_INIT)
return;
INIT_LIST_HEAD(&tr->func_probes);
INIT_LIST_HEAD(&tr->mod_trace);
INIT_LIST_HEAD(&tr->mod_notrace);
tr->flags |= TRACE_ARRAY_FL_MOD_INIT;
}
#else
@@ -7845,7 +7862,8 @@ static void ftrace_update_trampoline(struct ftrace_ops *ops)
__init void ftrace_init_global_array_ops(struct trace_array *tr)
{
tr->ops = &global_ops;
tr->ops->private = tr;
if (!global_ops.private)
global_ops.private = tr;
ftrace_init_trace_array(tr);
init_array_fgraph_ops(tr, tr->ops);
}
@@ -8287,7 +8305,7 @@ pid_write(struct file *filp, const char __user *ubuf,
if (!cnt)
return 0;
mutex_lock(&ftrace_lock);
guard(mutex)(&ftrace_lock);
switch (type) {
case TRACE_PIDS:
@@ -8303,14 +8321,13 @@ pid_write(struct file *filp, const char __user *ubuf,
lockdep_is_held(&ftrace_lock));
break;
default:
ret = -EINVAL;
WARN_ON_ONCE(1);
goto out;
return -EINVAL;
}
ret = trace_pid_write(filtered_pids, &pid_list, ubuf, cnt);
if (ret < 0)
goto out;
return ret;
switch (type) {
case TRACE_PIDS:
@@ -8339,11 +8356,8 @@ pid_write(struct file *filp, const char __user *ubuf,
ftrace_update_pid_func();
ftrace_startup_all(0);
out:
mutex_unlock(&ftrace_lock);
if (ret > 0)
*ppos += ret;
*ppos += ret;
return ret;
}
@@ -8746,17 +8760,17 @@ static int
ftrace_enable_sysctl(const struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
int ret = -ENODEV;
int ret;
mutex_lock(&ftrace_lock);
guard(mutex)(&ftrace_lock);
if (unlikely(ftrace_disabled))
goto out;
return -ENODEV;
ret = proc_dointvec(table, write, buffer, lenp, ppos);
if (ret || !write || (last_ftrace_enabled == !!ftrace_enabled))
goto out;
return ret;
if (ftrace_enabled) {
@@ -8770,8 +8784,7 @@ ftrace_enable_sysctl(const struct ctl_table *table, int write,
} else {
if (is_permanent_ops_registered()) {
ftrace_enabled = true;
ret = -EBUSY;
goto out;
return -EBUSY;
}
/* stopping ftrace calls (just send to ftrace_stub) */
@@ -8781,9 +8794,7 @@ ftrace_enable_sysctl(const struct ctl_table *table, int write,
}
last_ftrace_enabled = !!ftrace_enabled;
out:
mutex_unlock(&ftrace_lock);
return ret;
return 0;
}
static struct ctl_table ftrace_sysctls[] = {


@@ -10661,6 +10661,14 @@ out:
return ret;
}
#ifdef CONFIG_FUNCTION_TRACER
/* Used to set cached module ftrace filtering at boot up */
__init struct trace_array *trace_get_global_array(void)
{
return &global_trace;
}
#endif
void __init ftrace_boot_snapshot(void)
{
#ifdef CONFIG_TRACER_MAX_TRACE


@@ -432,6 +432,7 @@ struct trace_array {
enum {
TRACE_ARRAY_FL_GLOBAL = BIT(0),
TRACE_ARRAY_FL_BOOT = BIT(1),
TRACE_ARRAY_FL_MOD_INIT = BIT(2),
};
extern struct list_head ftrace_trace_arrays;
@@ -693,8 +694,10 @@ void trace_latency_header(struct seq_file *m);
void trace_default_header(struct seq_file *m);
void print_trace_header(struct seq_file *m, struct trace_iterator *iter);
void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops);
int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops);
void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops,
struct ftrace_regs *fregs);
int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops,
struct ftrace_regs *fregs);
void tracing_start_cmdline_record(void);
void tracing_stop_cmdline_record(void);
@@ -1112,6 +1115,7 @@ void ftrace_destroy_function_files(struct trace_array *tr);
int ftrace_allocate_ftrace_ops(struct trace_array *tr);
void ftrace_free_ftrace_ops(struct trace_array *tr);
void ftrace_init_global_array_ops(struct trace_array *tr);
struct trace_array *trace_get_global_array(void);
void ftrace_init_array_ops(struct trace_array *tr, ftrace_func_t func);
void ftrace_reset_array_ops(struct trace_array *tr);
void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d_tracer);


@@ -134,7 +134,7 @@ static int
process_fetch_insn(struct fetch_insn *code, void *rec, void *edata,
void *dest, void *base)
{
struct pt_regs *regs = rec;
struct ftrace_regs *fregs = rec;
unsigned long val;
int ret;
@@ -142,17 +142,17 @@ retry:
/* 1st stage: get value from context */
switch (code->op) {
case FETCH_OP_STACK:
val = regs_get_kernel_stack_nth(regs, code->param);
val = ftrace_regs_get_kernel_stack_nth(fregs, code->param);
break;
case FETCH_OP_STACKP:
val = kernel_stack_pointer(regs);
val = ftrace_regs_get_stack_pointer(fregs);
break;
case FETCH_OP_RETVAL:
val = regs_return_value(regs);
val = ftrace_regs_get_return_value(fregs);
break;
#ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
case FETCH_OP_ARG:
val = regs_get_kernel_argument(regs, code->param);
val = ftrace_regs_get_argument(fregs, code->param);
break;
case FETCH_OP_EDATA:
val = *(unsigned long *)((unsigned long)edata + code->offset);
@@ -175,7 +175,7 @@ NOKPROBE_SYMBOL(process_fetch_insn)
/* function entry handler */
static nokprobe_inline void
__fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
struct pt_regs *regs,
struct ftrace_regs *fregs,
struct trace_event_file *trace_file)
{
struct fentry_trace_entry_head *entry;
@@ -189,41 +189,71 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
if (trace_trigger_soft_disabled(trace_file))
return;
dsize = __get_data_size(&tf->tp, regs, NULL);
dsize = __get_data_size(&tf->tp, fregs, NULL);
entry = trace_event_buffer_reserve(&fbuffer, trace_file,
sizeof(*entry) + tf->tp.size + dsize);
if (!entry)
return;
fbuffer.regs = regs;
fbuffer.regs = ftrace_get_regs(fregs);
entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
entry->ip = entry_ip;
store_trace_args(&entry[1], &tf->tp, regs, NULL, sizeof(*entry), dsize);
store_trace_args(&entry[1], &tf->tp, fregs, NULL, sizeof(*entry), dsize);
trace_event_buffer_commit(&fbuffer);
}
static void
fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
struct pt_regs *regs)
struct ftrace_regs *fregs)
{
struct event_file_link *link;
trace_probe_for_each_link_rcu(link, &tf->tp)
__fentry_trace_func(tf, entry_ip, regs, link->file);
__fentry_trace_func(tf, entry_ip, fregs, link->file);
}
NOKPROBE_SYMBOL(fentry_trace_func);
static nokprobe_inline
void store_fprobe_entry_data(void *edata, struct trace_probe *tp, struct ftrace_regs *fregs)
{
struct probe_entry_arg *earg = tp->entry_arg;
unsigned long val = 0;
int i;
if (!earg)
return;
for (i = 0; i < earg->size; i++) {
struct fetch_insn *code = &earg->code[i];
switch (code->op) {
case FETCH_OP_ARG:
val = ftrace_regs_get_argument(fregs, code->param);
break;
case FETCH_OP_ST_EDATA:
*(unsigned long *)((unsigned long)edata + code->offset) = val;
break;
case FETCH_OP_END:
goto end;
default:
break;
}
}
end:
return;
}
/* function exit handler */
static int trace_fprobe_entry_handler(struct fprobe *fp, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *fregs,
void *entry_data)
{
struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp);
if (tf->tp.entry_arg)
store_trace_entry_data(entry_data, &tf->tp, regs);
store_fprobe_entry_data(entry_data, &tf->tp, fregs);
return 0;
}
@@ -231,7 +261,7 @@ NOKPROBE_SYMBOL(trace_fprobe_entry_handler)
static nokprobe_inline void
__fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *fregs,
void *entry_data, struct trace_event_file *trace_file)
{
struct fexit_trace_entry_head *entry;
@@ -245,60 +275,63 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
if (trace_trigger_soft_disabled(trace_file))
return;
dsize = __get_data_size(&tf->tp, regs, entry_data);
dsize = __get_data_size(&tf->tp, fregs, entry_data);
entry = trace_event_buffer_reserve(&fbuffer, trace_file,
sizeof(*entry) + tf->tp.size + dsize);
if (!entry)
return;
fbuffer.regs = regs;
fbuffer.regs = ftrace_get_regs(fregs);
entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
entry->func = entry_ip;
entry->ret_ip = ret_ip;
store_trace_args(&entry[1], &tf->tp, regs, entry_data, sizeof(*entry), dsize);
store_trace_args(&entry[1], &tf->tp, fregs, entry_data, sizeof(*entry), dsize);
trace_event_buffer_commit(&fbuffer);
}
static void
fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs, void *entry_data)
unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data)
{
struct event_file_link *link;
trace_probe_for_each_link_rcu(link, &tf->tp)
__fexit_trace_func(tf, entry_ip, ret_ip, regs, entry_data, link->file);
__fexit_trace_func(tf, entry_ip, ret_ip, fregs, entry_data, link->file);
}
NOKPROBE_SYMBOL(fexit_trace_func);
#ifdef CONFIG_PERF_EVENTS
static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
struct pt_regs *regs)
struct ftrace_regs *fregs)
{
struct trace_event_call *call = trace_probe_event_call(&tf->tp);
struct fentry_trace_entry_head *entry;
struct hlist_head *head;
int size, __size, dsize;
struct pt_regs *regs;
int rctx;
head = this_cpu_ptr(call->perf_events);
if (hlist_empty(head))
return 0;
dsize = __get_data_size(&tf->tp, regs, NULL);
dsize = __get_data_size(&tf->tp, fregs, NULL);
__size = sizeof(*entry) + tf->tp.size + dsize;
size = ALIGN(__size + sizeof(u32), sizeof(u64));
size -= sizeof(u32);
entry = perf_trace_buf_alloc(size, NULL, &rctx);
entry = perf_trace_buf_alloc(size, &regs, &rctx);
if (!entry)
return 0;
regs = ftrace_fill_perf_regs(fregs, regs);
entry->ip = entry_ip;
memset(&entry[1], 0, dsize);
store_trace_args(&entry[1], &tf->tp, regs, NULL, sizeof(*entry), dsize);
store_trace_args(&entry[1], &tf->tp, fregs, NULL, sizeof(*entry), dsize);
perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
head, NULL);
return 0;
@@ -307,31 +340,34 @@ NOKPROBE_SYMBOL(fentry_perf_func);
static void
fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *fregs,
void *entry_data)
{
struct trace_event_call *call = trace_probe_event_call(&tf->tp);
struct fexit_trace_entry_head *entry;
struct hlist_head *head;
int size, __size, dsize;
struct pt_regs *regs;
int rctx;
head = this_cpu_ptr(call->perf_events);
if (hlist_empty(head))
return;
dsize = __get_data_size(&tf->tp, regs, entry_data);
dsize = __get_data_size(&tf->tp, fregs, entry_data);
__size = sizeof(*entry) + tf->tp.size + dsize;
size = ALIGN(__size + sizeof(u32), sizeof(u64));
size -= sizeof(u32);
entry = perf_trace_buf_alloc(size, NULL, &rctx);
entry = perf_trace_buf_alloc(size, &regs, &rctx);
if (!entry)
return;
regs = ftrace_fill_perf_regs(fregs, regs);
entry->func = entry_ip;
entry->ret_ip = ret_ip;
store_trace_args(&entry[1], &tf->tp, regs, entry_data, sizeof(*entry), dsize);
store_trace_args(&entry[1], &tf->tp, fregs, entry_data, sizeof(*entry), dsize);
perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
head, NULL);
}
@@ -339,33 +375,34 @@ NOKPROBE_SYMBOL(fexit_perf_func);
#endif /* CONFIG_PERF_EVENTS */
static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *fregs,
void *entry_data)
{
struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp);
int ret = 0;
if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE))
fentry_trace_func(tf, entry_ip, regs);
fentry_trace_func(tf, entry_ip, fregs);
#ifdef CONFIG_PERF_EVENTS
if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE))
ret = fentry_perf_func(tf, entry_ip, regs);
ret = fentry_perf_func(tf, entry_ip, fregs);
#endif
return ret;
}
NOKPROBE_SYMBOL(fentry_dispatcher);
static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip,
unsigned long ret_ip, struct pt_regs *regs,
unsigned long ret_ip, struct ftrace_regs *fregs,
void *entry_data)
{
struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp);
if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE))
fexit_trace_func(tf, entry_ip, ret_ip, regs, entry_data);
fexit_trace_func(tf, entry_ip, ret_ip, fregs, entry_data);
#ifdef CONFIG_PERF_EVENTS
if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE))
fexit_perf_func(tf, entry_ip, ret_ip, regs, entry_data);
fexit_perf_func(tf, entry_ip, ret_ip, fregs, entry_data);
#endif
}
NOKPROBE_SYMBOL(fexit_dispatcher);
@@ -379,6 +416,9 @@ static void free_trace_fprobe(struct trace_fprobe *tf)
}
}
/* Since alloc_trace_fprobe() can return an error, also check whether the pointer is an ERR pointer. */
DEFINE_FREE(free_trace_fprobe, struct trace_fprobe *, if (!IS_ERR_OR_NULL(_T)) free_trace_fprobe(_T))
/*
* Allocate new trace_probe and initialize it (including fprobe).
*/
@@ -387,10 +427,9 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group,
const char *symbol,
struct tracepoint *tpoint,
struct module *mod,
int maxactive,
int nargs, bool is_return)
{
struct trace_fprobe *tf;
struct trace_fprobe *tf __free(free_trace_fprobe) = NULL;
int ret = -ENOMEM;
tf = kzalloc(struct_size(tf, tp.args, nargs), GFP_KERNEL);
@@ -399,7 +438,7 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group,
tf->symbol = kstrdup(symbol, GFP_KERNEL);
if (!tf->symbol)
goto error;
return ERR_PTR(-ENOMEM);
if (is_return)
tf->fp.exit_handler = fexit_dispatcher;
@@ -408,17 +447,13 @@
tf->tpoint = tpoint;
tf->mod = mod;
tf->fp.nr_maxactive = maxactive;
ret = trace_probe_init(&tf->tp, event, group, false, nargs);
if (ret < 0)
goto error;
return ERR_PTR(ret);
dyn_event_init(&tf->devent, &trace_fprobe_ops);
return tf;
error:
free_trace_fprobe(tf);
return ERR_PTR(ret);
return_ptr(tf);
}
static struct trace_fprobe *find_trace_fprobe(const char *event,
@@ -845,14 +880,12 @@ static int register_trace_fprobe(struct trace_fprobe *tf)
struct trace_fprobe *old_tf;
int ret;
mutex_lock(&event_mutex);
guard(mutex)(&event_mutex);
old_tf = find_trace_fprobe(trace_probe_name(&tf->tp),
trace_probe_group_name(&tf->tp));
if (old_tf) {
ret = append_trace_fprobe(tf, old_tf);
goto end;
}
if (old_tf)
return append_trace_fprobe(tf, old_tf);
/* Register new event */
ret = register_fprobe_event(tf);
@@ -862,7 +895,7 @@ static int register_trace_fprobe(struct trace_fprobe *tf)
trace_probe_log_err(0, EVENT_EXIST);
} else
pr_warn("Failed to register probe event(%d)\n", ret);
goto end;
return ret;
}
/* Register fprobe */
@@ -872,8 +905,6 @@ static int register_trace_fprobe(struct trace_fprobe *tf)
else
dyn_event_add(&tf->devent, trace_probe_event_call(&tf->tp));
end:
mutex_unlock(&event_mutex);
return ret;
}
@@ -1034,7 +1065,10 @@ static int parse_symbol_and_return(int argc, const char *argv[],
return 0;
}
static int __trace_fprobe_create(int argc, const char *argv[])
DEFINE_FREE(module_put, struct module *, if (_T) module_put(_T))
static int trace_fprobe_create_internal(int argc, const char *argv[],
struct traceprobe_parse_context *ctx)
{
/*
* Argument syntax:
@@ -1060,24 +1094,20 @@ static int __trace_fprobe_create(int argc, const char *argv[])
* Type of args:
* FETCHARG:TYPE : use TYPE instead of unsigned long.
*/
struct trace_fprobe *tf = NULL;
int i, len, new_argc = 0, ret = 0;
struct trace_fprobe *tf __free(free_trace_fprobe) = NULL;
int i, new_argc = 0, ret = 0;
bool is_return = false;
char *symbol = NULL;
char *symbol __free(kfree) = NULL;
const char *event = NULL, *group = FPROBE_EVENT_SYSTEM;
const char **new_argv = NULL;
int maxactive = 0;
const char **new_argv __free(kfree) = NULL;
char buf[MAX_EVENT_NAME_LEN];
char gbuf[MAX_EVENT_NAME_LEN];
char sbuf[KSYM_NAME_LEN];
char abuf[MAX_BTF_ARGS_LEN];
char *dbuf = NULL;
char *dbuf __free(kfree) = NULL;
bool is_tracepoint = false;
struct module *tp_mod = NULL;
struct module *tp_mod __free(module_put) = NULL;
struct tracepoint *tpoint = NULL;
struct traceprobe_parse_context ctx = {
.flags = TPARG_FL_KERNEL | TPARG_FL_FPROBE,
};
if ((argv[0][0] != 'f' && argv[0][0] != 't') || argc < 2)
return -ECANCELED;
@@ -1087,35 +1117,13 @@ static int __trace_fprobe_create(int argc, const char *argv[])
group = TRACEPOINT_EVENT_SYSTEM;
}
trace_probe_log_init("trace_fprobe", argc, argv);
event = strchr(&argv[0][1], ':');
if (event)
event++;
if (isdigit(argv[0][1])) {
if (event)
len = event - &argv[0][1] - 1;
else
len = strlen(&argv[0][1]);
if (len > MAX_EVENT_NAME_LEN - 1) {
if (argv[0][1] != '\0') {
if (argv[0][1] != ':') {
trace_probe_log_set_index(0);
trace_probe_log_err(1, BAD_MAXACT);
goto parse_error;
}
memcpy(buf, &argv[0][1], len);
buf[len] = '\0';
ret = kstrtouint(buf, 0, &maxactive);
if (ret || !maxactive) {
trace_probe_log_err(1, BAD_MAXACT);
goto parse_error;
}
/* fprobe rethook instances are iterated over via a list. The
* maximum should stay reasonable.
*/
if (maxactive > RETHOOK_MAXACTIVE_MAX) {
trace_probe_log_err(1, MAXACT_TOO_BIG);
goto parse_error;
return -EINVAL;
}
event = &argv[0][2];
}
trace_probe_log_set_index(1);
@@ -1123,20 +1131,14 @@ static int __trace_fprobe_create(int argc, const char *argv[])
/* a symbol(or tracepoint) must be specified */
ret = parse_symbol_and_return(argc, argv, &symbol, &is_return, is_tracepoint);
if (ret < 0)
goto parse_error;
if (!is_return && maxactive) {
trace_probe_log_set_index(0);
trace_probe_log_err(1, BAD_MAXACT_TYPE);
goto parse_error;
}
return -EINVAL;
trace_probe_log_set_index(0);
if (event) {
ret = traceprobe_parse_event_name(&event, &group, gbuf,
event - argv[0]);
if (ret)
-goto parse_error;
+return -EINVAL;
}
if (!event) {
@@ -1152,67 +1154,62 @@ static int __trace_fprobe_create(int argc, const char *argv[])
}
if (is_return)
-ctx.flags |= TPARG_FL_RETURN;
+ctx->flags |= TPARG_FL_RETURN;
else
-ctx.flags |= TPARG_FL_FENTRY;
+ctx->flags |= TPARG_FL_FENTRY;
if (is_tracepoint) {
-ctx.flags |= TPARG_FL_TPOINT;
+ctx->flags |= TPARG_FL_TPOINT;
tpoint = find_tracepoint(symbol, &tp_mod);
if (tpoint) {
-ctx.funcname = kallsyms_lookup(
+ctx->funcname = kallsyms_lookup(
(unsigned long)tpoint->probestub,
NULL, NULL, NULL, sbuf);
} else if (IS_ENABLED(CONFIG_MODULES)) {
/* This *may* be loaded afterwards */
tpoint = TRACEPOINT_STUB;
-ctx.funcname = symbol;
+ctx->funcname = symbol;
} else {
trace_probe_log_set_index(1);
trace_probe_log_err(0, NO_TRACEPOINT);
-goto parse_error;
+return -EINVAL;
}
} else
-ctx.funcname = symbol;
+ctx->funcname = symbol;
argc -= 2; argv += 2;
new_argv = traceprobe_expand_meta_args(argc, argv, &new_argc,
-abuf, MAX_BTF_ARGS_LEN, &ctx);
-if (IS_ERR(new_argv)) {
-ret = PTR_ERR(new_argv);
-new_argv = NULL;
-goto out;
-}
+abuf, MAX_BTF_ARGS_LEN, ctx);
+if (IS_ERR(new_argv))
+return PTR_ERR(new_argv);
if (new_argv) {
argc = new_argc;
argv = new_argv;
}
-if (argc > MAX_TRACE_ARGS) {
-ret = -E2BIG;
-goto out;
-}
+if (argc > MAX_TRACE_ARGS)
+return -E2BIG;
ret = traceprobe_expand_dentry_args(argc, argv, &dbuf);
if (ret)
-goto out;
+return ret;
/* setup a probe */
tf = alloc_trace_fprobe(group, event, symbol, tpoint, tp_mod,
-maxactive, argc, is_return);
+argc, is_return);
if (IS_ERR(tf)) {
ret = PTR_ERR(tf);
/* This must return -ENOMEM, else there is a bug */
WARN_ON_ONCE(ret != -ENOMEM);
-goto out; /* We know tf is not allocated */
+return ret;
}
/* parse arguments */
for (i = 0; i < argc; i++) {
trace_probe_log_set_index(i + 2);
-ctx.offset = 0;
-ret = traceprobe_parse_probe_arg(&tf->tp, i, argv[i], &ctx);
+ctx->offset = 0;
+ret = traceprobe_parse_probe_arg(&tf->tp, i, argv[i], ctx);
if (ret)
-goto error; /* This can be -ENOMEM */
+return ret; /* This can be -ENOMEM */
}
if (is_return && tf->tp.entry_arg) {
@@ -1223,7 +1220,7 @@ static int __trace_fprobe_create(int argc, const char *argv[])
ret = traceprobe_set_print_fmt(&tf->tp,
is_return ? PROBE_PRINT_RETURN : PROBE_PRINT_NORMAL);
if (ret < 0)
-goto error;
+return ret;
ret = register_trace_fprobe(tf);
if (ret) {
@@ -1234,29 +1231,32 @@ static int __trace_fprobe_create(int argc, const char *argv[])
trace_probe_log_err(0, BAD_PROBE_ADDR);
else if (ret != -ENOMEM && ret != -EEXIST)
trace_probe_log_err(0, FAIL_REG_PROBE);
-goto error;
+return -EINVAL;
}
-out:
-if (tp_mod)
-module_put(tp_mod);
+/* 'tf' is successfully registered. To avoid freeing, assign NULL. */
+tf = NULL;
+return 0;
+}
+static int trace_fprobe_create_cb(int argc, const char *argv[])
+{
+struct traceprobe_parse_context ctx = {
+.flags = TPARG_FL_KERNEL | TPARG_FL_FPROBE,
+};
+int ret;
+trace_probe_log_init("trace_fprobe", argc, argv);
+ret = trace_fprobe_create_internal(argc, argv, &ctx);
traceprobe_finish_parse(&ctx);
trace_probe_log_clear();
-kfree(new_argv);
-kfree(symbol);
-kfree(dbuf);
return ret;
-parse_error:
-ret = -EINVAL;
-error:
-free_trace_fprobe(tf);
-goto out;
}
static int trace_fprobe_create(const char *raw_command)
{
-return trace_probe_create(raw_command, __trace_fprobe_create);
+return trace_probe_create(raw_command, trace_fprobe_create_cb);
}
static int trace_fprobe_release(struct dyn_event *ev)
@@ -1278,8 +1278,6 @@ static int trace_fprobe_show(struct seq_file *m, struct dyn_event *ev)
seq_putc(m, 't');
else
seq_putc(m, 'f');
-if (trace_fprobe_is_return(tf) && tf->fp.nr_maxactive)
-seq_printf(m, "%d", tf->fp.nr_maxactive);
seq_printf(m, ":%s/%s", trace_probe_group_name(&tf->tp),
trace_probe_name(&tf->tp));


@@ -175,16 +175,16 @@ struct fgraph_times {
};
int trace_graph_entry(struct ftrace_graph_ent *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
unsigned long *task_var = fgraph_get_task_var(gops);
struct trace_array *tr = gops->private;
struct trace_array_cpu *data;
struct fgraph_times *ftimes;
-unsigned long flags;
unsigned int trace_ctx;
long disabled;
-int ret;
+int ret = 0;
int cpu;
if (*task_var & TRACE_GRAPH_NOTRACE)
@@ -235,25 +235,21 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
if (tracing_thresh)
return 1;
-local_irq_save(flags);
+preempt_disable_notrace();
cpu = raw_smp_processor_id();
data = per_cpu_ptr(tr->array_buffer.data, cpu);
-disabled = atomic_inc_return(&data->disabled);
-if (likely(disabled == 1)) {
-trace_ctx = tracing_gen_ctx_flags(flags);
-if (unlikely(IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
-tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR))) {
+disabled = atomic_read(&data->disabled);
+if (likely(!disabled)) {
+trace_ctx = tracing_gen_ctx();
+if (IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
+tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR)) {
unsigned long retaddr = ftrace_graph_top_ret_addr(current);
ret = __trace_graph_retaddr_entry(tr, trace, trace_ctx, retaddr);
-} else
+} else {
ret = __trace_graph_entry(tr, trace, trace_ctx);
-} else {
-ret = 0;
+}
}
-atomic_dec(&data->disabled);
-local_irq_restore(flags);
+preempt_enable_notrace();
return ret;
}
@@ -314,13 +310,12 @@ static void handle_nosleeptime(struct ftrace_graph_ret *trace,
}
void trace_graph_return(struct ftrace_graph_ret *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops, struct ftrace_regs *fregs)
{
unsigned long *task_var = fgraph_get_task_var(gops);
struct trace_array *tr = gops->private;
struct trace_array_cpu *data;
struct fgraph_times *ftimes;
-unsigned long flags;
unsigned int trace_ctx;
long disabled;
int size;
@@ -341,20 +336,20 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
trace->calltime = ftimes->calltime;
-local_irq_save(flags);
+preempt_disable_notrace();
cpu = raw_smp_processor_id();
data = per_cpu_ptr(tr->array_buffer.data, cpu);
-disabled = atomic_inc_return(&data->disabled);
-if (likely(disabled == 1)) {
-trace_ctx = tracing_gen_ctx_flags(flags);
+disabled = atomic_read(&data->disabled);
+if (likely(!disabled)) {
+trace_ctx = tracing_gen_ctx();
__trace_graph_return(tr, trace, trace_ctx);
}
-atomic_dec(&data->disabled);
-local_irq_restore(flags);
+preempt_enable_notrace();
}
static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
struct fgraph_times *ftimes;
int size;
@@ -378,7 +373,7 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
(trace->rettime - ftimes->calltime < tracing_thresh))
return;
else
-trace_graph_return(trace, gops);
+trace_graph_return(trace, gops, fregs);
}
static struct fgraph_ops funcgraph_ops = {
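Condensed, the change to both handlers in this file is the following
before/after (abbreviated from the hunks above, not the complete functions):

    /* before: irqs off plus a counted nesting check */
    local_irq_save(flags);
    disabled = atomic_inc_return(&data->disabled);
    if (likely(disabled == 1))
        __trace_graph_return(tr, trace, tracing_gen_ctx_flags(flags));
    atomic_dec(&data->disabled);
    local_irq_restore(flags);

    /* after: disabling preemption is sufficient; 'disabled' is only read */
    preempt_disable_notrace();
    if (likely(!atomic_read(&data->disabled)))
        __trace_graph_return(tr, trace, tracing_gen_ctx());
    preempt_enable_notrace();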


@@ -176,7 +176,8 @@ static int irqsoff_display_graph(struct trace_array *tr, int set)
}
static int irqsoff_graph_entry(struct ftrace_graph_ent *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
struct trace_array *tr = irqsoff_trace;
struct trace_array_cpu *data;
@@ -214,7 +215,8 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace,
}
static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
struct trace_array *tr = irqsoff_trace;
struct trace_array_cpu *data;
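The same mechanical change repeats for every fgraph user below, because the
fgraph_ops callback prototypes now carry the register state. In sketch form
(as implied by the diffs; see the typedefs in include/linux/ftrace.h for the
authoritative form):

    int  (*entryfunc)(struct ftrace_graph_ent *trace,
                      struct fgraph_ops *gops, struct ftrace_regs *fregs);
    void (*retfunc)(struct ftrace_graph_ret *trace,
                    struct fgraph_ops *gops, struct ftrace_regs *fregs);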


@@ -232,7 +232,7 @@ array:
/* Sum up total data length for dynamic arrays (strings) */
static nokprobe_inline int
-__get_data_size(struct trace_probe *tp, struct pt_regs *regs, void *edata)
+__get_data_size(struct trace_probe *tp, void *regs, void *edata)
{
struct probe_arg *arg;
int i, len, ret = 0;
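Widening the regs parameter to void * lets the same fetch code serve both
probe flavors: kprobe events still hand it a struct pt_regs *, while fprobe
events now hand it a struct ftrace_regs *, and the fetch implementations cast
back to whichever type they were given. Sketched call sites (variable names
illustrative):

    /* kprobe event path */
    dsize = __get_data_size(&tk->tp, regs, NULL);        /* pt_regs */

    /* fprobe event path */
    dsize = __get_data_size(&tf->tp, fregs, entry_data); /* ftrace_regs */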


@@ -113,7 +113,8 @@ static int wakeup_display_graph(struct trace_array *tr, int set)
}
static int wakeup_graph_entry(struct ftrace_graph_ent *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
struct trace_array *tr = wakeup_trace;
struct trace_array_cpu *data;
@@ -150,7 +151,8 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace,
}
static void wakeup_graph_return(struct ftrace_graph_ret *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
struct trace_array *tr = wakeup_trace;
struct trace_array_cpu *data;


@@ -774,7 +774,8 @@ struct fgraph_fixture {
};
static __init int store_entry(struct ftrace_graph_ent *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
struct fgraph_fixture *fixture = container_of(gops, struct fgraph_fixture, gops);
const char *type = fixture->store_type_name;
@@ -807,7 +808,8 @@ static __init int store_entry(struct ftrace_graph_ent *trace,
}
static __init void store_return(struct ftrace_graph_ret *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
struct fgraph_fixture *fixture = container_of(gops, struct fgraph_fixture, gops);
const char *type = fixture->store_type_name;
@@ -1025,7 +1027,8 @@ static unsigned int graph_hang_thresh;
/* Wrap the real function entry probe to avoid possible hanging */
static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace,
-struct fgraph_ops *gops)
+struct fgraph_ops *gops,
+struct ftrace_regs *fregs)
{
/* This is harmlessly racy, we want to approximately detect a hang */
if (unlikely(++graph_hang_thresh > GRAPH_MAX_FUNC_TEST)) {
@@ -1039,7 +1042,7 @@ static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace,
return 0;
}
-return trace_graph_entry(trace, gops);
+return trace_graph_entry(trace, gops, fregs);
}
static struct fgraph_ops fgraph_ops __initdata = {


@@ -17,10 +17,8 @@ static u32 rand1, entry_val, exit_val;
/* Use indirect calls to avoid inlining the target functions */
static u32 (*target)(u32 value);
static u32 (*target2)(u32 value);
-static u32 (*target_nest)(u32 value, u32 (*nest)(u32));
static unsigned long target_ip;
static unsigned long target2_ip;
-static unsigned long target_nest_ip;
static int entry_return_value;
static noinline u32 fprobe_selftest_target(u32 value)
@@ -33,14 +31,9 @@ static noinline u32 fprobe_selftest_target2(u32 value)
return (value / div_factor) + 1;
}
-static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32))
-{
-return nest(value + 2);
-}
static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip,
unsigned long ret_ip,
-struct pt_regs *regs, void *data)
+struct ftrace_regs *fregs, void *data)
{
KUNIT_EXPECT_FALSE(current_test, preemptible());
/* This can be called on the fprobe_selftest_target and the fprobe_selftest_target2 */
@@ -59,9 +52,9 @@ static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip,
static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
unsigned long ret_ip,
-struct pt_regs *regs, void *data)
+struct ftrace_regs *fregs, void *data)
{
-unsigned long ret = regs_return_value(regs);
+unsigned long ret = ftrace_regs_get_return_value(fregs);
KUNIT_EXPECT_FALSE(current_test, preemptible());
if (ip != target_ip) {
@@ -79,22 +72,6 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
KUNIT_EXPECT_NULL(current_test, data);
}
-static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip,
-unsigned long ret_ip,
-struct pt_regs *regs, void *data)
-{
-KUNIT_EXPECT_FALSE(current_test, preemptible());
-return 0;
-}
-static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip,
-unsigned long ret_ip,
-struct pt_regs *regs, void *data)
-{
-KUNIT_EXPECT_FALSE(current_test, preemptible());
-KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip);
-}
/* Test entry only (no rethook) */
static void test_fprobe_entry(struct kunit *test)
{
@@ -191,25 +168,6 @@ static void test_fprobe_data(struct kunit *test)
KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
}
-/* Test nr_maxactive */
-static void test_fprobe_nest(struct kunit *test)
-{
-static const char *syms[] = {"fprobe_selftest_target", "fprobe_selftest_nest_target"};
-struct fprobe fp = {
-.entry_handler = nest_entry_handler,
-.exit_handler = nest_exit_handler,
-.nr_maxactive = 1,
-};
-current_test = test;
-KUNIT_EXPECT_EQ(test, 0, register_fprobe_syms(&fp, syms, 2));
-target_nest(rand1, target);
-KUNIT_EXPECT_EQ(test, 1, fp.nmissed);
-KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
-}
static void test_fprobe_skip(struct kunit *test)
{
struct fprobe fp = {
@@ -247,10 +205,8 @@ static int fprobe_test_init(struct kunit *test)
rand1 = get_random_u32_above(div_factor);
target = fprobe_selftest_target;
target2 = fprobe_selftest_target2;
-target_nest = fprobe_selftest_nest_target;
target_ip = get_ftrace_location(target);
target2_ip = get_ftrace_location(target2);
-target_nest_ip = get_ftrace_location(target_nest);
return 0;
}
@@ -260,7 +216,6 @@ static struct kunit_case fprobe_testcases[] = {
KUNIT_CASE(test_fprobe),
KUNIT_CASE(test_fprobe_syms),
KUNIT_CASE(test_fprobe_data),
-KUNIT_CASE(test_fprobe_nest),
KUNIT_CASE(test_fprobe_skip),
{}
};
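Since fprobe now rides on the per-task function-graph shadow stacks, there is
no fixed instance pool to size, so nr_maxactive and the nesting test that
exercised it are gone. A minimal sketch of the updated callback and
registration shape (handler names made up; signatures follow the diff above):

    static int my_entry(struct fprobe *fp, unsigned long ip,
                        unsigned long ret_ip, struct ftrace_regs *fregs,
                        void *data)
    {
        return 0;       /* non-zero would skip the exit handler */
    }

    static void my_exit(struct fprobe *fp, unsigned long ip,
                        unsigned long ret_ip, struct ftrace_regs *fregs,
                        void *data)
    {
        pr_info("ret=%lx\n", ftrace_regs_get_return_value(fregs));
    }

    static struct fprobe fp = {
        .entry_handler = my_entry,
        .exit_handler  = my_exit,
        /* .nr_maxactive no longer exists */
    };

    /* e.g. from module init: register_fprobe(&fp, "vfs_read", NULL); */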


@@ -50,7 +50,7 @@ static void show_backtrace(void)
static int sample_entry_handler(struct fprobe *fp, unsigned long ip,
unsigned long ret_ip,
-struct pt_regs *regs, void *data)
+struct ftrace_regs *fregs, void *data)
{
if (use_trace)
/*
@@ -67,7 +67,7 @@ static int sample_entry_handler(struct fprobe *fp, unsigned long ip,
}
static void sample_exit_handler(struct fprobe *fp, unsigned long ip,
-unsigned long ret_ip, struct pt_regs *regs,
+unsigned long ret_ip, struct ftrace_regs *regs,
void *data)
{
unsigned long rip = ret_ip;


@@ -0,0 +1,19 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+# description: Generic dynamic event - Repeating add/remove fprobe events
+# requires: dynamic_events "f[:[<group>/][<event>]] <func-name>[%return] [<args>]":README
+
+echo 0 > events/enable
+echo > dynamic_events
+
+PLACE=$FUNCTION_FORK
+REPEAT_TIMES=64
+
+for i in `seq 1 $REPEAT_TIMES`; do
+  echo "f:myevent $PLACE" >> dynamic_events
+  grep -q myevent dynamic_events
+  test -d events/fprobes/myevent
+  echo > dynamic_events
+done
+
+clear_trace


@@ -16,9 +16,7 @@ aarch64)
REG=%r0 ;;
esac
-check_error 'f^100 vfs_read' # MAXACT_NO_KPROBE
-check_error 'f^1a111 vfs_read' # BAD_MAXACT
-check_error 'f^100000 vfs_read' # MAXACT_TOO_BIG
+check_error 'f^100 vfs_read' # BAD_MAXACT
check_error 'f ^non_exist_func' # BAD_PROBE_ADDR (enoent)
check_error 'f ^vfs_read+10' # BAD_PROBE_ADDR