ftrace: Move ARCH_SUPPORTS_FTRACE_SAVE_REGS in Kconfig
kernel/trace/Kconfig
#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_FP_TEST
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_DYNAMIC_FTRACE_WITH_REGS
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc option -pg with -mfentry

config HAVE_C_RECORDMCOUNT
	bool
	help
	  Whether the C implementation of recordmcount is available.

config TRACER_MAX_TRACE
	bool

config TRACE_CLOCK
	bool

config RING_BUFFER
	bool
	select TRACE_CLOCK

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	bool

config EVENT_POWER_TRACING_DEPRECATED
	depends on EVENT_TRACING
	bool "Deprecated power event trace API, to be removed"
	default y
	help
	  Provides old power event types:
	  C-state/idle accounting events:
	  power:power_start
	  power:power_end
	  and old cpufreq accounting event:
	  power:power_frequency
	  This is for userspace compatibility
	  and will vanish after 5 kernel iterations,
	  namely 3.1.

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.

# All tracer options should select GENERIC_TRACER. Options that are enabled
# by all tracers (the context switch and event tracers) select TRACING
# instead. This allows those options to appear when no other tracer is
# selected, but hides them when something else selects them. The two
# options GENERIC_TRACER and TRACING are both needed to avoid circular
# dependencies while accomplishing this hiding of the automatic options.

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING
	select TRACE_CLOCK
	select IRQ_WORK

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway; they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement
	# irqflags tracing for your architecture instead.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to measure the duration of functions and
	  to draw a call graph for each thread, with some information such
	  as the return value. This is done by saving the function's return
	  address into a stack of calls kept on the current task structure.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on !ARCH_USES_GETTIMEOFFSET
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on !ARCH_USES_GETTIMEOFFSET
	depends on PREEMPT
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various tracepoints in the kernel,
	  allowing the user to pick and choose which tracepoints they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.

config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It adds hooks into
	  the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all likely and unlikely macros
	  in the kernel. It will display the results in:

	  /sys/kernel/debug/tracing/trace_stat/branch_annotated

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  evaluated in the kernel is recorded, whether the branch was
	  taken or not. The results will be displayed in:

	  /sys/kernel/debug/tracing/trace_stat/branch_all

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose significant
	  overhead on the system. It should only be enabled when the
	  system is to be analyzed in much detail.
endchoice

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely/unlikely annotations are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.

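The runtime control described above can be sketched as a guarded script; it is a no-op unless run as root on a kernel built with this option (the paths are the ones named in the help text):

```shell
#!/bin/sh
# Enable the stack tracer, peek at the deepest stack recorded, disable it.
# Guarded: on kernels/configs without the knob this just reports and exits.
KNOB=/proc/sys/kernel/stack_tracer_enabled   # the kernel.stack_tracer_enabled sysctl

if [ -w "$KNOB" ]; then
	echo 1 > "$KNOB"
	head -n 5 /sys/kernel/debug/tracing/stack_trace
	echo 0 > "$KNOB"
else
	echo "stack tracer not available (or not root); skipping"
fi
echo done
```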
config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config KPROBE_EVENT
	depends on KPROBES
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	select TRACING
	select PROBE_EVENTS
	default y
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.txt for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by the perf-probe subcommand of perf
	  tools. If you want to use perf tools, this option is strongly
	  recommended.

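The on-the-fly interface mentioned above is the kprobe_events file. A guarded sketch of the add/enable/remove cycle (the probe name myprobe and the target symbol do_sys_open are illustrative examples; the syntax is documented in Documentation/trace/kprobetrace.txt):

```shell
#!/bin/sh
# Register a kprobe event, toggle it, then remove it again.
# Guarded: skipped unless run as root on a kernel with KPROBE_EVENT.
TRACING=/sys/kernel/debug/tracing

if [ -w "$TRACING/kprobe_events" ]; then
	echo 'p:myprobe do_sys_open' >> "$TRACING/kprobe_events"
	echo 1 > "$TRACING/events/kprobes/myprobe/enable"
	echo 0 > "$TRACING/events/kprobes/myprobe/enable"
	echo '-:myprobe' >> "$TRACING/kprobe_events"
	echo "probe cycle complete"
else
	echo "kprobe events not available (or not root); skipping"
fi
```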
config UPROBE_EVENT
	bool "Enable uprobes-based dynamic events"
	depends on ARCH_SUPPORTS_UPROBES
	depends on MMU
	select UPROBES
	select PROBE_EVENTS
	select TRACING
	default n
	help
	  This allows the user to add tracing events on top of userspace
	  dynamic events (similar to tracepoints) on the fly via the trace
	  events interface. Those events can be inserted wherever uprobes
	  can probe, and record various registers.
	  This option is required if you plan to use the perf-probe
	  subcommand of perf tools on user space applications.

config PROBE_EVENTS
	def_bool n

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction), and a table is created so that they
	  can be dynamically enabled again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.

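The per-call-site table this option builds is visible at runtime: every patchable site is listed in available_filter_functions. A guarded sketch (it only reports a count; on kernels without this option or without debugfs mounted it just says so):

```shell
#!/bin/sh
# Count the call sites that dynamic ftrace can patch on the running kernel.
# Guarded: reports and exits on kernels without DYNAMIC_FTRACE or debugfs.
FUNCS=/sys/kernel/debug/tracing/available_filter_functions

if [ -r "$FUNCS" ]; then
	echo "patchable call sites: $(wc -l < "$FUNCS")"
else
	echo "dynamic ftrace table not available; skipping"
fi
```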
config DYNAMIC_FTRACE_WITH_REGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stats directory; this file shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on FTRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It enables each event, runs various loads with the event
	  enabled, and then disables it. This adds a bit more time to
	  kernel bootup, since it does this for every system call defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous,
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on, e.g., an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.

endif # FTRACE

endif # TRACING_SUPPORT
