# kernel/trace/Kconfig
#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER

menu "Tracers"

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FRAME_POINTER
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; that NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks. (An
	  illustrative callback sketch follows this entry.)

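# Illustrative only: a minimal sketch of how a module might hook the
# function tracer once FUNCTION_TRACER is enabled. It assumes the
# two-argument ftrace callback signature of this kernel version; the
# names my_trace_func and my_ops are hypothetical.
#
#	#include <linux/ftrace.h>
#
#	/* Called on entry of every traced kernel function; marked notrace
#	   so the callback itself is not instrumented. */
#	static void notrace my_trace_func(unsigned long ip,
#					  unsigned long parent_ip)
#	{
#		/* ip is the traced function, parent_ip its caller */
#	}
#
#	static struct ftrace_ops my_ops = {
#		.func = my_trace_func,
#	};
#
#	/* register_ftrace_function(&my_ops) starts the callbacks;
#	   unregister_ftrace_function(&my_ops) stops them. */
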
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such
	  as the return value.
	  This is done by saving each function's return address into a
	  stack of calls kept in the current task structure.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy. (A short example of such a
	  section follows this entry.)

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase when this option is
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

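# Illustrative only: the kind of irqs-off critical section whose length
# the irqsoff tracer measures. local_irq_save()/local_irq_restore() are
# the standard kernel primitives; the work in between is hypothetical.
#
#	#include <linux/irqflags.h>
#
#	unsigned long flags;
#
#	local_irq_save(flags);		/* interrupts off: latency window starts */
#	/* ... work done with interrupts disabled ... */
#	local_irq_restore(flags);	/* interrupts back on: window ends */
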
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	depends on DEBUG_KERNEL
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy. (A short example of such a
	  section follows this entry.)

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase when this option is
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

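# Illustrative only: the kind of preemption-off critical section whose
# length the preemption-off tracer measures. preempt_disable() and
# preempt_enable() are the standard kernel primitives; the work in
# between is hypothetical.
#
#	#include <linux/preempt.h>
#
#	preempt_disable();	/* preemption off: latency window starts */
#	/* ... per-CPU work that must not be preempted ... */
#	preempt_enable();	/* preemption on: window ends */
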
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	select TRACING
	select MARKERS
	help
	  This tracer hooks into the context-switch path and records
	  every switch between tasks.

config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers optimize boot times: it records
	  the timings of the initcalls and traces key events and the
	  identity of tasks that can cause boot delays, such as
	  context switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  (Note that the tracing self tests can't be enabled if this tracer
	  is selected, because the self tests are initcalls as well and
	  would therefore invalidate the boot trace.)

config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros used
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this adds a significant overhead; only turn it on if you
	  need to profile the system's use of these macros. (An
	  illustrative annotation example follows this entry.)

	  Say N if unsure.

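# Illustrative only: the kind of annotated branches this profiler counts.
# likely()/unlikely() are the standard kernel macros; the names my_req,
# my_handle and my_process are hypothetical.
#
#	#include <linux/compiler.h>
#	#include <linux/errno.h>
#
#	static int my_handle(struct my_req *req)
#	{
#		/* Error paths are typically annotated as unlikely; the
#		   profiler records how often each annotation was correct. */
#		if (unlikely(!req))
#			return -EINVAL;
#
#		if (likely(req->ready))
#			return my_process(req);
#
#		return 0;
#	}
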
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  statement in the kernel is recorded, whether the branch was
	  taken or not. The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely() and unlikely() branches are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the individual likely and unlikely condition
	  evaluations in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer, so you can see when and
	  where the events happened, as well as their results.

	  Say N if unsure.

config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL
	depends on X86
	select TRACING
	help
	  This tracer helps developers analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	select STACKTRACE
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.

config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (patching them out of the binary image and replacing them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (which stops all CPUs)
	  and modifies the code to jump over the call to ftrace.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On
	  bootup, each configured tracer is exercised to verify that it
	  is functioning properly.

endmenu