# kernel/trace/Kconfig (as of merge commit 'v2.6.28-rc7' into tracing/core)
#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER

menu "Tracers"

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FRAME_POINTER
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. This NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled at
	  runtime (the boot-up default), the overhead of these instructions
	  is very small and not measurable even in micro-benchmarks.

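As an illustration of how the function tracer is driven from user space once this option is built in, here is a minimal sketch. It assumes a kernel with CONFIG_FUNCTION_TRACER=y and debugfs mounted at /debugfs, the mount point the help texts in this file use:

```shell
# Illustrative only: assumes CONFIG_FUNCTION_TRACER=y and debugfs
# mounted at /debugfs (e.g. mount -t debugfs nodev /debugfs).
cd /debugfs/tracing

# List the tracers this kernel was built with; "function" appears
# only when FUNCTION_TRACER is enabled.
cat available_tracers

# Activate the function tracer and look at the live trace.
echo function > current_tracer
head -20 trace

# Deactivate it again; the patched-back NOPs make the idle cost negligible.
echo nop > current_tracer
```
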
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its primary purpose is to measure the duration of functions and
	  to draw a call graph for each thread, with information such as
	  the return value.
	  This is done by saving each function's return address in a stack
	  of calls on the current task structure.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

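A sketch of how the maximum-search method described above is typically operated, assuming the same /debugfs mount point the help text uses and a kernel with this tracer built in:

```shell
# Illustrative only: assumes CONFIG_IRQSOFF_TRACER=y and debugfs at /debugfs.
cd /debugfs/tracing

# Select the irqs-off latency tracer.
echo irqsoff > current_tracer

# Reset the recorded maximum; this also (re-)starts the search.
echo 0 > tracing_max_latency

# ... let the workload of interest run for a while ...

# Worst irqs-off latency observed so far, in microseconds;
# the trace output shows where that maximum occurred.
cat tracing_max_latency
cat trace
```
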
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	depends on DEBUG_KERNEL
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	select TRACING
	select MARKERS
	help
	  This tracer hooks into the context switch and records
	  every switch between tasks.

config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers optimize boot times: it records
	  the timings of the initcalls, and traces key events and the
	  identity of tasks that can cause boot delays, such as
	  context switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debugfs/tracing/trace text output is readable too.

	  (Note that tracing self-tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and
	  would invalidate the boot trace.)

config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it was a hit or a miss.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely()s and unlikely()s are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL
	depends on X86
	select TRACING
	help
	  This tracer helps developers analyze and optimize the kernel's
	  power management decisions, specifically C-state and P-state
	  behavior.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	select STACKTRACE
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.

	  Say N if unsure.

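A sketch of how the stack tracer's output is inspected. The file names match the help text above; the enable knob shown is the stack_tracer_enabled sysctl, which is an assumption here (kernels of this vintage may instead enable it with the `stacktrace` boot parameter):

```shell
# Illustrative only: assumes CONFIG_STACK_TRACER=y and debugfs at /debugfs.
# Enabling via sysctl is an assumption; older kernels use the
# "stacktrace" kernel boot parameter instead.
echo 1 > /proc/sys/kernel/stack_tracer_enabled

# Deepest stack seen so far, and the per-function breakdown of
# who contributed how many bytes to it.
cat /debugfs/tracing/stack_max_size
cat /debugfs/tracing/stack_trace
```
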
config BTS_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing is
	  active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stopping all CPUs)
	  and modifies the code to jump over the call to ftrace.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On
	  bootup, a series of tests is run to verify that the tracer is
	  functioning properly. It will run tests on all the configured
	  tracers of ftrace.

endmenu