1 ftrace - Function Tracer
2 ========================
3
4 Copyright 2008 Red Hat Inc.
5 Author: Steven Rostedt <srostedt@redhat.com>
6 License: The GNU Free Documentation License, Version 1.2
7 (dual licensed under the GPL v2)
8 Reviewers: Elias Oltmanns, Randy Dunlap, Andrew Morton,
9 John Kacur, and David Teigland.
10 Written for: 2.6.28-rc2
11
12 Introduction
13 ------------
14
15 Ftrace is an internal tracer designed to help developers and
16 system designers find out what is going on inside the kernel.
17 It can be used for debugging or analyzing latencies and
18 performance issues that take place outside of user-space.
19
20 Although ftrace is the function tracer, it also includes an
21 infrastructure that allows for other types of tracing. Some of
22 the tracers that are currently in ftrace include a tracer to
23 trace context switches, the time it takes for a high priority
24 task to run after it was woken up, the time interrupts are
25 disabled, and more (ftrace allows for tracer plugins, which
26 means that the list of tracers can always grow).
27
28
29 The File System
30 ---------------
31
32 Ftrace uses the debugfs file system to hold the control files as
33 well as the files to display output.
34
35 When debugfs is configured into the kernel (selecting any ftrace
36 option will do this), the directory /sys/kernel/debug will be created. To mount
37 this directory, you can add the following to your /etc/fstab file:
38
39 debugfs /sys/kernel/debug debugfs defaults 0 0
40
41 Or you can mount it at run time with:
42
43 mount -t debugfs nodev /sys/kernel/debug
44
45 For quicker access to that directory you may want to make a soft link to
46 it:
47
48 ln -s /sys/kernel/debug /debug
49
50 Selecting any ftrace option will also create a directory called tracing
51 within debugfs. The rest of this document assumes that you are in
52 the tracing directory (cd /sys/kernel/debug/tracing) and concentrates
53 only on the files within that directory, rather than cluttering the text
54 with the full "/sys/kernel/debug/tracing" path name.
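
For example, a quick check that everything is in place might look like
this (a sketch; it assumes debugfs and at least one ftrace option are
configured into the kernel):

	mount -t debugfs nodev /sys/kernel/debug
	cd /sys/kernel/debug/tracing
	ls current_tracer available_tracers trace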
55
56 That's it! (assuming that you have ftrace configured into your kernel)
57
58 After mounting the debugfs, you can see a directory called
59 "tracing". This directory contains the control and output files
60 of ftrace. Here is a list of some of the key files (an example using several of them together follows the list):
61
62
63 Note: all time values are in microseconds.
64
65 current_tracer:
66
67 This is used to set or display the current tracer
68 that is configured.
69
70 available_tracers:
71
72 This holds the different types of tracers that
73 have been compiled into the kernel. The
74 tracers listed here can be configured by
75 echoing their name into current_tracer.
76
77 tracing_enabled:
78
79 This sets or displays whether the current_tracer
80 is activated and tracing or not. Echo 0 into this
81 file to disable the tracer or 1 to enable it.
82
83 trace:
84
85 This file holds the output of the trace in a human
86 readable format (described below).
87
88 trace_pipe:
89
90 The output is the same as the "trace" file but this
91 file is meant to be streamed with live tracing.
92 Reads from this file will block until new data is
93 retrieved. Unlike the "trace" file, this file is a
94 consumer. This means reading from this file causes
95 sequential reads to display more current data. Once
96 data is read from this file, it is consumed, and
97 will not be read again with a sequential read. The
98 "trace" file is static, and if the tracer is not
99 adding more data, it will display the same
100 information every time it is read.
101
102 trace_options:
103
104 This file lets the user control the amount of data
105 that is displayed in one of the above output
106 files.
107
108 tracing_max_latency:
109
110 Some of the tracers record the max latency.
111 For example, the maximum time for which interrupts
112 were disabled. This time is saved in this file. The max
113 trace will also be stored, and displayed by "trace".
114 A new max trace will only be recorded if the
115 latency is greater than the value in this
116 file (in microseconds).
117
118 buffer_size_kb:
119
120 This sets or displays the number of kilobytes each CPU
121 buffer can hold. The tracer buffers are the same size
122 for each CPU. The displayed number is the size of the
123 CPU buffer and not total size of all buffers. The
124 trace buffers are allocated in pages (blocks of memory
125 that the kernel uses for allocation, usually 4 KB in size).
126 If the last page allocated has room for more bytes
127 than requested, the rest of the page will be used,
128 making the actual allocation bigger than requested.
129 ( Note, the size may not be a multiple of the page size
130 due to buffer management overhead. )
131
132 This can only be updated when the current_tracer
133 is set to "nop".
134
135 tracing_cpumask:
136
137 This is a mask that lets the user only trace
138 on specified CPUs. The format is a hex string
139 representing the CPUs.
140
141 set_ftrace_filter:
142
143 When dynamic ftrace is configured in (see the
144 section below "dynamic ftrace"), the code is dynamically
145 modified (code text rewrite) to disable calling of the
146 function profiler (mcount). This lets tracing be configured
147 in with practically no performance overhead. This also
148 has a side effect of enabling or disabling specific functions
149 to be traced. Echoing names of functions into this file
150 will limit the trace to only those functions.
151
152 set_ftrace_notrace:
153
154 This has an effect opposite to that of
155 set_ftrace_filter. Any function that is added here will not
156 be traced. If a function exists in both set_ftrace_filter
157 and set_ftrace_notrace, the function will _not_ be traced.
158
159 set_ftrace_pid:
160
161 Have the function tracer only trace a single thread.
162
163 set_graph_function:
164
165 Set a "trigger" function where tracing should start
166 with the function graph tracer (See the section
167 "dynamic ftrace" for more details).
168
169 available_filter_functions:
170
171 This lists the functions that ftrace
172 has processed and can trace. These are the function
173 names that you can pass to "set_ftrace_filter" or
174 "set_ftrace_notrace". (See the section "dynamic ftrace"
175 below for more details.)
176
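As a rough illustration of how several of these files fit together,
here is a session sketch (the buffer size and CPU mask are arbitrary
example values, and "schedule" is simply one traceable function name):

	# cat available_tracers
	# echo nop > current_tracer
	# echo 4096 > buffer_size_kb
	# echo 3 > tracing_cpumask
	# echo schedule > set_ftrace_filter
	# echo function > current_tracer
	# echo 1 > tracing_enabled
	# cat trace_pipe > /tmp/trace.log &
	# echo 0 > tracing_enabled

Setting current_tracer to "nop" first is what allows buffer_size_kb to
be changed, the hex mask 3 restricts tracing to CPUs 0 and 1, and
reading trace_pipe consumes the data as it is produced.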
177
178 The Tracers
179 -----------
180
181 Here is the list of current tracers that may be configured.
182
183 "function"
184
185 Function call tracer to trace all kernel functions.
186
187 "function_graph"
188
189 Similar to the function tracer except that the
190 function tracer probes the functions on their entry
191 whereas the function graph tracer traces on both entry
192 and exit of the functions. It then provides the ability
193 to draw a graph of function calls similar to C source
194 code.
195
196 "sched_switch"
197
198 Traces the context switches and wakeups between tasks.
199
200 "irqsoff"
201
202 Traces the areas that disable interrupts and saves
203 the trace with the longest max latency.
204 See tracing_max_latency. When a new max is recorded,
205 it replaces the old trace. It is best to view this
206 trace with the latency-format option enabled.
207
208 "preemptoff"
209
210 Similar to irqsoff but traces and records the amount of
211 time for which preemption is disabled.
212
213 "preemptirqsoff"
214
215 Similar to irqsoff and preemptoff, but traces and
216 records the largest time for which irqs and/or preemption
217 is disabled.
218
219 "wakeup"
220
221 Traces and records the max latency that it takes for
222 the highest priority task to get scheduled after
223 it has been woken up.
224
225 "hw-branch-tracer"
226
227 Uses the BTS CPU feature on x86 CPUs to trace all
228 branches executed.
229
230 "nop"
231
232 This is the "trace nothing" tracer. To remove all
233 tracers from tracing simply echo "nop" into
234 current_tracer.
235
236
237 Examples of using the tracer
238 ----------------------------
239
240 Here are typical examples of using the tracers when controlling
241 them only with the debugfs interface (without using any
242 user-land utilities).
243
244 Output format:
245 --------------
246
247 Here is an example of the output format of the file "trace"
248
249 --------
250 # tracer: function
251 #
252 # TASK-PID CPU# TIMESTAMP FUNCTION
253 # | | | | |
254 bash-4251 [01] 10152.583854: path_put <-path_walk
255 bash-4251 [01] 10152.583855: dput <-path_put
256 bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
257 --------
258
259 A header is printed with the name of the tracer that produced
260 the trace. In this case the tracer is "function". Then comes a header
261 showing the format: the task name "bash", the task PID "4251", the
262 CPU that it was running on "01", the timestamp in <secs>.<usecs>
263 format, the function name that was traced "path_put" and the
264 parent function that called this function "path_walk". The
265 timestamp is the time at which the function was entered.
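
Output like the above can be produced with a short session along these
lines (a sketch; any command can be substituted for "ls", and the
"function" tracer section below walks through this in more detail):

	# sysctl kernel.ftrace_enabled=1
	# echo function > current_tracer
	# echo 1 > tracing_enabled
	# ls
	# echo 0 > tracing_enabled
	# cat trace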
266
267 The sched_switch tracer also includes tracing of task wakeups
268 and context switches.
269
270 ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S
271 ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S
272 ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R
273 events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R
274 kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R
275 ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R
276
277 Wake ups are represented by a "+" and the context switches are
278 shown as "==>". The format is:
279
280 Context switches:
281
282 Previous task Next Task
283
284 <pid>:<prio>:<state> ==> <pid>:<prio>:<state>
285
286 Wake ups:
287
288 Current task Task waking up
289
290 <pid>:<prio>:<state> + <pid>:<prio>:<state>
291
292 The prio is the internal kernel priority, which is the inverse
293 of the priority that is usually displayed by user-space tools.
294 A kernel prio of zero corresponds to the highest user RT priority
295 (99). Prio 100 starts the "nice" priorities, with 100 being equal
296 to nice -20 and 139 being nice 19. The prio "140" is reserved for
297 the idle task, which is the lowest priority thread (pid 0).
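
For example, applying this mapping (a rough worked example):

	a task started with "chrt -f 5"   ->  kernel prio 99 - 5  = 94
	a task running at nice 0          ->  kernel prio 120 + 0 = 120
	a task running at nice -5         ->  kernel prio 120 - 5 = 115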
298
299
300 Latency trace format
301 --------------------
302
303 When the latency-format option is enabled, the trace file gives
304 somewhat more information to see why a latency happened.
305 Here is a typical trace.
306
307 # tracer: irqsoff
308 #
309 irqsoff latency trace v1.1.5 on 2.6.26-rc8
310 --------------------------------------------------------------------
311 latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
312 -----------------
313 | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
314 -----------------
315 => started at: apic_timer_interrupt
316 => ended at: do_softirq
317
318 # _------=> CPU#
319 # / _-----=> irqs-off
320 # | / _----=> need-resched
321 # || / _---=> hardirq/softirq
322 # ||| / _--=> preempt-depth
323 # |||| /
324 # ||||| delay
325 # cmd pid ||||| time | caller
326 # \ / ||||| \ | /
327 <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
328 <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
329 <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)
330
331
332 This shows that the current tracer is "irqsoff" tracing the time
333 for which interrupts were disabled. It gives the trace version
334 and the version of the kernel on which this was executed
335 (2.6.26-rc8). Then it displays the max latency in microsecs (97
336 us). The number of trace entries displayed and the total number
337 recorded (both are three: #3/3). The type of preemption that was
338 used (PREEMPT). VP, KP, SP, and HP are always zero and are
339 reserved for later use. #P is the number of online CPUS (#P:2).
340
341 The task is the process that was running when the latency
342 occurred. (swapper pid: 0).
343
344 The start and stop (the functions in which the interrupts were
345 disabled and enabled respectively) that caused the latencies:
346
347 apic_timer_interrupt is where the interrupts were disabled.
348 do_softirq is where they were enabled again.
349
350 The next lines after the header are the trace itself. The header
351 explains which is which.
352
353 cmd: The name of the process in the trace.
354
355 pid: The PID of that process.
356
357 CPU#: The CPU on which the process was running.
358
359 irqs-off: 'd' interrupts are disabled. '.' otherwise.
360 Note: If the architecture does not support a way to
361 read the irq flags variable, an 'X' will always
362 be printed here.
363
364 need-resched: 'N' task need_resched is set, '.' otherwise.
365
366 hardirq/softirq:
367 'H' - hard irq occurred inside a softirq.
368 'h' - hard irq is running
369 's' - soft irq is running
370 '.' - normal context.
371
372 preempt-depth: The level of preemption disabling (the preempt_count)
373
374 The above is mostly meaningful for kernel developers.
375
376 time: When the latency-format option is enabled, the trace file
377 output includes a timestamp relative to the start of the
378 trace. This differs from the output when latency-format
379 is disabled, which includes an absolute timestamp.
380
381 delay: This is just to help catch your eye a bit better (and
382 it still needs to be fixed to be relative only to the same CPU).
383 The marks are determined by the difference between the
384 current trace entry and the next one.
385 '!' - greater than preempt_mark_thresh (default 100)
386 '+' - greater than 1 microsecond
387 ' ' - less than or equal to 1 microsecond.
388
389 The rest is the same as the 'trace' file.
390
391
392 trace_options
393 -------------
394
395 The trace_options file is used to control what gets printed in
396 the trace output. To see what is available, simply cat the file:
397
398 cat trace_options
399 print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
400 noblock nostacktrace nosched-tree nouserstacktrace nosym-userobj
401
402 To disable one of the options, echo in the option prepended with
403 "no".
404
405 echo noprint-parent > trace_options
406
407 To enable an option, leave off the "no".
408
409 echo sym-offset > trace_options
410
411 Here are the available options:
412
413 print-parent - On function traces, display the calling (parent)
414 function as well as the function being traced.
415
416 print-parent:
417 bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul
418
419 noprint-parent:
420 bash-4000 [01] 1477.606694: simple_strtoul
421
422
423 sym-offset - Display not only the function name, but also the
424 offset in the function. For example, instead of
425 seeing just "ktime_get", you will see
426 "ktime_get+0xb/0x20".
427
428 sym-offset:
429 bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0
430
431 sym-addr - this will also display the function address as well
432 as the function name.
433
434 sym-addr:
435 bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
436
437 verbose - This deals with the trace file when the
438 latency-format option is enabled.
439
440 bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
441 (+0.000ms): simple_strtoul (strict_strtoul)
442
443 raw - This will display raw numbers. This option is best for
444 use with user applications that can translate the raw
445 numbers better than having it done in the kernel.
446
447 hex - Similar to raw, but the numbers will be in a hexadecimal
448 format.
449
450 bin - This will print out the formats in raw binary.
451
452 block - TBD (needs update)
453
454 stacktrace - This is one of the options that changes the trace
455 itself. When a trace is recorded, so is the stack
456 of functions. This allows for back traces of
457 trace sites.
458
459 userstacktrace - This option changes the trace. It records a
460 stacktrace of the current userspace thread.
461
462 sym-userobj - when user stacktraces are enabled, look up which
463 object the address belongs to, and print a
464 relative address. This is especially useful when
465 ASLR is on; otherwise you don't get a chance to
466 resolve the address to an object/file/line after
467 the app is no longer running.
468
469 The lookup is performed when you read
470 trace,trace_pipe. Example:
471
472 a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0x494]
473 <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
474
475 sched-tree - trace all tasks that are on the runqueue, at
476 every scheduling event. Will add overhead if
477 there are a lot of tasks running at once.
478
479 latency-format - This option changes the trace. When
480 it is enabled, the trace displays
481 additional information about the
482 latencies, as described in "Latency
483 trace format".
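
For example, to get the latency-style output used in the tracer
examples below:

	echo latency-format > trace_options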
484
485 sched_switch
486 ------------
487
488 This tracer simply records schedule switches. Here is an example
489 of how to use it.
490
491 # echo sched_switch > current_tracer
492 # echo 1 > tracing_enabled
493 # sleep 1
494 # echo 0 > tracing_enabled
495 # cat trace
496
497 # tracer: sched_switch
498 #
499 # TASK-PID CPU# TIMESTAMP FUNCTION
500 # | | | | |
501 bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R
502 bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R
503 sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R
504 bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S
505 bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R
506 sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R
507 bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D
508 bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R
509 <idle>-0 [00] 240.132589: 0:140:R + 4:115:S
510 <idle>-0 [00] 240.132591: 0:140:R ==> 4:115:R
511 ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R
512 <idle>-0 [00] 240.132598: 0:140:R + 4:115:S
513 <idle>-0 [00] 240.132599: 0:140:R ==> 4:115:R
514 ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R
515 sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R
516 [...]
517
518
519 As discussed previously for this format, the header
520 shows the name of the tracer and the column layout. The
521 "FUNCTION" is a misnomer since here it represents the wake ups
522 and context switches.
523
524 The sched_switch file only lists the wake ups (represented with
525 '+') and context switches ('==>') with the previous task or
526 current task first followed by the next task or task waking up.
527 The format for both of these is PID:KERNEL-PRIO:TASK-STATE.
528 Remember that the KERNEL-PRIO is the inverse of the actual
529 priority with zero (0) being the highest priority and the nice
530 values starting at 100 (nice -20). Below is a quick chart to map
531 the kernel priority to user land priorities.
532
533 Kernel Space User Space
534 ===============================================================
535 0(high) to 98(low) user RT priority 99(high) to 1(low)
536 with SCHED_RR or SCHED_FIFO
537 ---------------------------------------------------------------
538 99 sched_priority is not used in scheduling
539 decisions(it must be specified as 0)
540 ---------------------------------------------------------------
541 100(high) to 139(low) user nice -20(high) to 19(low)
542 ---------------------------------------------------------------
543 140 idle task priority
544 ---------------------------------------------------------------
545
546 The task states are:
547
548 R - running : wants to run, may not actually be running
549 S - sleep : process is waiting to be woken up (handles signals)
550 D - disk sleep (uninterruptible sleep) : process must be woken up
551 (ignores signals)
552 T - stopped : process suspended
553 t - traced : process is being traced (with something like gdb)
554 Z - zombie : process waiting to be cleaned up
555 X - unknown
556
557
558 ftrace_enabled
559 --------------
560
561 The following tracers give different output
562 depending on whether or not the sysctl ftrace_enabled is set. To
563 set ftrace_enabled, one can either use the sysctl command or
564 set it via the proc file system interface.
565
566 sysctl kernel.ftrace_enabled=1
567
568 or
569
570 echo 1 > /proc/sys/kernel/ftrace_enabled
571
572 To disable ftrace_enabled simply replace the '1' with '0' in the
573 above commands.
574
575 When ftrace_enabled is set, the tracers will also record the
576 functions that are executed within the trace. The descriptions of the
577 tracers below also show an example with ftrace_enabled set.
578
579
580 irqsoff
581 -------
582
583 When interrupts are disabled, the CPU can not react to any other
584 external event (besides NMIs and SMIs). This prevents the timer
585 interrupt from triggering or the mouse interrupt from letting
586 the kernel know of a new mouse event. The result is added
587 latency in reaction time.
588
589 The irqsoff tracer tracks the time for which interrupts are
590 disabled. When a new maximum latency is hit, the tracer saves
591 the trace leading up to that latency point so that every time a
592 new maximum is reached, the old saved trace is discarded and the
593 new trace is saved.
594
595 To reset the maximum, echo 0 into tracing_max_latency. Here is
596 an example:
597
598 # echo irqsoff > current_tracer
599 # echo latency-format > trace_options
600 # echo 0 > tracing_max_latency
601 # echo 1 > tracing_enabled
602 # ls -ltr
603 [...]
604 # echo 0 > tracing_enabled
605 # cat trace
606 # tracer: irqsoff
607 #
608 irqsoff latency trace v1.1.5 on 2.6.26
609 --------------------------------------------------------------------
610 latency: 12 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
611 -----------------
612 | task: bash-3730 (uid:0 nice:0 policy:0 rt_prio:0)
613 -----------------
614 => started at: sys_setpgid
615 => ended at: sys_setpgid
616
617 # _------=> CPU#
618 # / _-----=> irqs-off
619 # | / _----=> need-resched
620 # || / _---=> hardirq/softirq
621 # ||| / _--=> preempt-depth
622 # |||| /
623 # ||||| delay
624 # cmd pid ||||| time | caller
625 # \ / ||||| \ | /
626 bash-3730 1d... 0us : _write_lock_irq (sys_setpgid)
627 bash-3730 1d..1 1us+: _write_unlock_irq (sys_setpgid)
628 bash-3730 1d..2 14us : trace_hardirqs_on (sys_setpgid)
629
630
631 Here we see that we had a latency of 12 microsecs (which is
632 very good). The _write_lock_irq in sys_setpgid disabled
633 interrupts. The difference between the 12 and the displayed
634 timestamp 14us occurred because the clock was incremented
635 between the time of recording the max latency and the time of
636 recording the function that had that latency.
637
638 Note the above example had ftrace_enabled not set. If we set
639 ftrace_enabled, we get a much larger output:
640
641 # tracer: irqsoff
642 #
643 irqsoff latency trace v1.1.5 on 2.6.26-rc8
644 --------------------------------------------------------------------
645 latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
646 -----------------
647 | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0)
648 -----------------
649 => started at: __alloc_pages_internal
650 => ended at: __alloc_pages_internal
651
652 # _------=> CPU#
653 # / _-----=> irqs-off
654 # | / _----=> need-resched
655 # || / _---=> hardirq/softirq
656 # ||| / _--=> preempt-depth
657 # |||| /
658 # ||||| delay
659 # cmd pid ||||| time | caller
660 # \ / ||||| \ | /
661 ls-4339 0...1 0us+: get_page_from_freelist (__alloc_pages_internal)
662 ls-4339 0d..1 3us : rmqueue_bulk (get_page_from_freelist)
663 ls-4339 0d..1 3us : _spin_lock (rmqueue_bulk)
664 ls-4339 0d..1 4us : add_preempt_count (_spin_lock)
665 ls-4339 0d..2 4us : __rmqueue (rmqueue_bulk)
666 ls-4339 0d..2 5us : __rmqueue_smallest (__rmqueue)
667 ls-4339 0d..2 5us : __mod_zone_page_state (__rmqueue_smallest)
668 ls-4339 0d..2 6us : __rmqueue (rmqueue_bulk)
669 ls-4339 0d..2 6us : __rmqueue_smallest (__rmqueue)
670 ls-4339 0d..2 7us : __mod_zone_page_state (__rmqueue_smallest)
671 ls-4339 0d..2 7us : __rmqueue (rmqueue_bulk)
672 ls-4339 0d..2 8us : __rmqueue_smallest (__rmqueue)
673 [...]
674 ls-4339 0d..2 46us : __rmqueue_smallest (__rmqueue)
675 ls-4339 0d..2 47us : __mod_zone_page_state (__rmqueue_smallest)
676 ls-4339 0d..2 47us : __rmqueue (rmqueue_bulk)
677 ls-4339 0d..2 48us : __rmqueue_smallest (__rmqueue)
678 ls-4339 0d..2 48us : __mod_zone_page_state (__rmqueue_smallest)
679 ls-4339 0d..2 49us : _spin_unlock (rmqueue_bulk)
680 ls-4339 0d..2 49us : sub_preempt_count (_spin_unlock)
681 ls-4339 0d..1 50us : get_page_from_freelist (__alloc_pages_internal)
682 ls-4339 0d..2 51us : trace_hardirqs_on (__alloc_pages_internal)
683
684
685
686 Here we traced a 50 microsecond latency. But we also see all the
687 functions that were called during that time. Note that by
688 enabling function tracing, we incur an added overhead. This
689 overhead may extend the latency times. But nevertheless, this
690 trace has provided some very helpful debugging information.
691
692
693 preemptoff
694 ----------
695
696 When preemption is disabled, we may be able to receive
697 interrupts but the task cannot be preempted and a higher
698 priority task must wait for preemption to be enabled again
699 before it can preempt a lower priority task.
700
701 The preemptoff tracer traces the places that disable preemption.
702 Like the irqsoff tracer, it records the maximum latency for
703 which preemption was disabled. The control of the preemptoff tracer
704 is much like that of the irqsoff tracer.
705
706 # echo preemptoff > current_tracer
707 # echo latency-format > trace_options
708 # echo 0 > tracing_max_latency
709 # echo 1 > tracing_enabled
710 # ls -ltr
711 [...]
712 # echo 0 > tracing_enabled
713 # cat trace
714 # tracer: preemptoff
715 #
716 preemptoff latency trace v1.1.5 on 2.6.26-rc8
717 --------------------------------------------------------------------
718 latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
719 -----------------
720 | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
721 -----------------
722 => started at: do_IRQ
723 => ended at: __do_softirq
724
725 # _------=> CPU#
726 # / _-----=> irqs-off
727 # | / _----=> need-resched
728 # || / _---=> hardirq/softirq
729 # ||| / _--=> preempt-depth
730 # |||| /
731 # ||||| delay
732 # cmd pid ||||| time | caller
733 # \ / ||||| \ | /
734 sshd-4261 0d.h. 0us+: irq_enter (do_IRQ)
735 sshd-4261 0d.s. 29us : _local_bh_enable (__do_softirq)
736 sshd-4261 0d.s1 30us : trace_preempt_on (__do_softirq)
737
738
739 This has some more changes. Preemption was disabled when an
740 interrupt came in (notice the 'h'), and was enabled while doing
741 a softirq (notice the 's'). But we also see that interrupts
742 were disabled both when entering the preempt off section and when
743 leaving it (the 'd'). We do not know if interrupts were enabled
744 in the meantime.
745
746 # tracer: preemptoff
747 #
748 preemptoff latency trace v1.1.5 on 2.6.26-rc8
749 --------------------------------------------------------------------
750 latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
751 -----------------
752 | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
753 -----------------
754 => started at: remove_wait_queue
755 => ended at: __do_softirq
756
757 # _------=> CPU#
758 # / _-----=> irqs-off
759 # | / _----=> need-resched
760 # || / _---=> hardirq/softirq
761 # ||| / _--=> preempt-depth
762 # |||| /
763 # ||||| delay
764 # cmd pid ||||| time | caller
765 # \ / ||||| \ | /
766 sshd-4261 0d..1 0us : _spin_lock_irqsave (remove_wait_queue)
767 sshd-4261 0d..1 1us : _spin_unlock_irqrestore (remove_wait_queue)
768 sshd-4261 0d..1 2us : do_IRQ (common_interrupt)
769 sshd-4261 0d..1 2us : irq_enter (do_IRQ)
770 sshd-4261 0d..1 2us : idle_cpu (irq_enter)
771 sshd-4261 0d..1 3us : add_preempt_count (irq_enter)
772 sshd-4261 0d.h1 3us : idle_cpu (irq_enter)
773 sshd-4261 0d.h. 4us : handle_fasteoi_irq (do_IRQ)
774 [...]
775 sshd-4261 0d.h. 12us : add_preempt_count (_spin_lock)
776 sshd-4261 0d.h1 12us : ack_ioapic_quirk_irq (handle_fasteoi_irq)
777 sshd-4261 0d.h1 13us : move_native_irq (ack_ioapic_quirk_irq)
778 sshd-4261 0d.h1 13us : _spin_unlock (handle_fasteoi_irq)
779 sshd-4261 0d.h1 14us : sub_preempt_count (_spin_unlock)
780 sshd-4261 0d.h1 14us : irq_exit (do_IRQ)
781 sshd-4261 0d.h1 15us : sub_preempt_count (irq_exit)
782 sshd-4261 0d..2 15us : do_softirq (irq_exit)
783 sshd-4261 0d... 15us : __do_softirq (do_softirq)
784 sshd-4261 0d... 16us : __local_bh_disable (__do_softirq)
785 sshd-4261 0d... 16us+: add_preempt_count (__local_bh_disable)
786 sshd-4261 0d.s4 20us : add_preempt_count (__local_bh_disable)
787 sshd-4261 0d.s4 21us : sub_preempt_count (local_bh_enable)
788 sshd-4261 0d.s5 21us : sub_preempt_count (local_bh_enable)
789 [...]
790 sshd-4261 0d.s6 41us : add_preempt_count (__local_bh_disable)
791 sshd-4261 0d.s6 42us : sub_preempt_count (local_bh_enable)
792 sshd-4261 0d.s7 42us : sub_preempt_count (local_bh_enable)
793 sshd-4261 0d.s5 43us : add_preempt_count (__local_bh_disable)
794 sshd-4261 0d.s5 43us : sub_preempt_count (local_bh_enable_ip)
795 sshd-4261 0d.s6 44us : sub_preempt_count (local_bh_enable_ip)
796 sshd-4261 0d.s5 44us : add_preempt_count (__local_bh_disable)
797 sshd-4261 0d.s5 45us : sub_preempt_count (local_bh_enable)
798 [...]
799 sshd-4261 0d.s. 63us : _local_bh_enable (__do_softirq)
800 sshd-4261 0d.s1 64us : trace_preempt_on (__do_softirq)
801
802
803 The above is an example of the preemptoff trace with
804 ftrace_enabled set. Here we see that interrupts were disabled
805 the entire time. The irq_enter code lets us know that we entered
806 an interrupt 'h'. Before that, the functions being traced still
807 show that it is not in an interrupt, but we can see from the
808 functions themselves that this is not the case.
809
810 Notice that when __do_softirq is called, it does not show a
811 preempt_count. It may seem that we missed a preempt enabling.
812 What really happened is that the preempt count is held on the
813 thread's stack and we switched to the softirq stack (4K stacks
814 in effect). The code does not copy the preempt count, but
815 because interrupts are disabled, we do not need to worry about
816 it. Having a tracer like this is good for letting people know
817 what really happens inside the kernel.
818
819
820 preemptirqsoff
821 --------------
822
823 Knowing the locations that have interrupts disabled or
824 preemption disabled for the longest times is helpful. But
825 sometimes we would like to know for how long either preemption
826 and/or interrupts are disabled.
827
828 Consider the following code:
829
830 local_irq_disable();
831 call_function_with_irqs_off();
832 preempt_disable();
833 call_function_with_irqs_and_preemption_off();
834 local_irq_enable();
835 call_function_with_preemption_off();
836 preempt_enable();
837
838 The irqsoff tracer will record the total length of
839 call_function_with_irqs_off() and
840 call_function_with_irqs_and_preemption_off().
841
842 The preemptoff tracer will record the total length of
843 call_function_with_irqs_and_preemption_off() and
844 call_function_with_preemption_off().
845
846 But neither will trace the total time that interrupts and/or
847 preemption is disabled. This total time is the time during which we can
848 not schedule. To record this time, use the preemptirqsoff
849 tracer.
850
851 Again, using this trace is much like the irqsoff and preemptoff
852 tracers.
853
854 # echo preemptirqsoff > current_tracer
855 # echo latency-format > trace_options
856 # echo 0 > tracing_max_latency
857 # echo 1 > tracing_enabled
858 # ls -ltr
859 [...]
860 # echo 0 > tracing_enabled
861 # cat trace
862 # tracer: preemptirqsoff
863 #
864 preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
865 --------------------------------------------------------------------
866 latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
867 -----------------
868 | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0)
869 -----------------
870 => started at: apic_timer_interrupt
871 => ended at: __do_softirq
872
873 # _------=> CPU#
874 # / _-----=> irqs-off
875 # | / _----=> need-resched
876 # || / _---=> hardirq/softirq
877 # ||| / _--=> preempt-depth
878 # |||| /
879 # ||||| delay
880 # cmd pid ||||| time | caller
881 # \ / ||||| \ | /
882 ls-4860 0d... 0us!: trace_hardirqs_off_thunk (apic_timer_interrupt)
883 ls-4860 0d.s. 294us : _local_bh_enable (__do_softirq)
884 ls-4860 0d.s1 294us : trace_preempt_on (__do_softirq)
885
886
887
888 The trace_hardirqs_off_thunk is called from assembly on x86 when
889 interrupts are disabled in the assembly code. Without the
890 function tracing, we do not know if interrupts were enabled
891 within the preemption points. We do see that it started with
892 preemption enabled.
893
894 Here is a trace with ftrace_enabled set:
895
896
897 # tracer: preemptirqsoff
898 #
899 preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
900 --------------------------------------------------------------------
901 latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
902 -----------------
903 | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
904 -----------------
905 => started at: write_chan
906 => ended at: __do_softirq
907
908 # _------=> CPU#
909 # / _-----=> irqs-off
910 # | / _----=> need-resched
911 # || / _---=> hardirq/softirq
912 # ||| / _--=> preempt-depth
913 # |||| /
914 # ||||| delay
915 # cmd pid ||||| time | caller
916 # \ / ||||| \ | /
917 ls-4473 0.N.. 0us : preempt_schedule (write_chan)
918 ls-4473 0dN.1 1us : _spin_lock (schedule)
919 ls-4473 0dN.1 2us : add_preempt_count (_spin_lock)
920 ls-4473 0d..2 2us : put_prev_task_fair (schedule)
921 [...]
922 ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts)
923 ls-4473 0d..2 13us : __switch_to (schedule)
924 sshd-4261 0d..2 14us : finish_task_switch (schedule)
925 sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch)
926 sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave)
927 sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set)
928 sshd-4261 0d..2 16us : do_IRQ (common_interrupt)
929 sshd-4261 0d..2 17us : irq_enter (do_IRQ)
930 sshd-4261 0d..2 17us : idle_cpu (irq_enter)
931 sshd-4261 0d..2 18us : add_preempt_count (irq_enter)
932 sshd-4261 0d.h2 18us : idle_cpu (irq_enter)
933 sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ)
934 sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq)
935 sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock)
936 sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq)
937 sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock)
938 [...]
939 sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq)
940 sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock)
941 sshd-4261 0d.h2 29us : irq_exit (do_IRQ)
942 sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit)
943 sshd-4261 0d..3 30us : do_softirq (irq_exit)
944 sshd-4261 0d... 30us : __do_softirq (do_softirq)
945 sshd-4261 0d... 31us : __local_bh_disable (__do_softirq)
946 sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable)
947 sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable)
948 [...]
949 sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip)
950 sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip)
951 sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt)
952 sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt)
953 sshd-4261 0d.s3 45us : idle_cpu (irq_enter)
954 sshd-4261 0d.s3 46us : add_preempt_count (irq_enter)
955 sshd-4261 0d.H3 46us : idle_cpu (irq_enter)
956 sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt)
957 sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt)
958 [...]
959 sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt)
960 sshd-4261 0d.H3 82us : ktime_get (tick_program_event)
961 sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get)
962 sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts)
963 sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts)
964 sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event)
965 sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event)
966 sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt)
967 sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit)
968 sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit)
969 sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable)
970 [...]
971 sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action)
972 sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq)
973 sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq)
974 sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq)
975 sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable)
976 sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq)
977 sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq)
978
979
980 This is a very interesting trace. It started with the preemption
981 of the ls task. We see that the task had the "need_resched" bit
982 set via the 'N' in the trace. Interrupts were disabled before
983 the spin_lock at the beginning of the trace. We see that a
984 schedule took place to run sshd. When the interrupts were
985 enabled, we took an interrupt. On return from the interrupt
986 handler, the softirq ran. We took another interrupt while
987 running the softirq as we see from the capital 'H'.
988
989
990 wakeup
991 ------
992
993 In a Real-Time environment it is very important to know the
994 time it takes from the point the highest priority task is woken
995 up to the time that it actually executes. This is also known as "schedule
996 latency". I stress the point that this is about RT tasks. It is
997 also important to know the scheduling latency of non-RT tasks,
998 but the average schedule latency is a better measurement for them.
999 Tools like LatencyTop are more appropriate for such
1000 measurements.
1001
1002 Real-Time environments are interested in the worst case latency.
1003 That is the longest latency it takes for something to happen,
1004 and not the average. We can have a very fast scheduler that may
1005 only have a large latency once in a while, but that would not
1006 work well with Real-Time tasks. The wakeup tracer was designed
1007 to record the worst case wakeups of RT tasks. Non-RT tasks are
1008 not recorded because the tracer only records one worst case and
1009 tracing non-RT tasks that are unpredictable will overwrite the
1010 worst case latency of RT tasks.
1011
1012 Since this tracer only deals with RT tasks, we will run this
1013 slightly differently than we did with the previous tracers.
1014 Instead of performing an 'ls', we will run 'sleep 1' under
1015 'chrt' which changes the priority of the task.
1016
1017 # echo wakeup > current_tracer
1018 # echo latency-format > trace_options
1019 # echo 0 > tracing_max_latency
1020 # echo 1 > tracing_enabled
1021 # chrt -f 5 sleep 1
1022 # echo 0 > tracing_enabled
1023 # cat trace
1024 # tracer: wakeup
1025 #
1026 wakeup latency trace v1.1.5 on 2.6.26-rc8
1027 --------------------------------------------------------------------
1028 latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
1029 -----------------
1030 | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5)
1031 -----------------
1032
1033 # _------=> CPU#
1034 # / _-----=> irqs-off
1035 # | / _----=> need-resched
1036 # || / _---=> hardirq/softirq
1037 # ||| / _--=> preempt-depth
1038 # |||| /
1039 # ||||| delay
1040 # cmd pid ||||| time | caller
1041 # \ / ||||| \ | /
1042 <idle>-0 1d.h4 0us+: try_to_wake_up (wake_up_process)
1043 <idle>-0 1d..4 4us : schedule (cpu_idle)
1044
1045
1046 Running this on an idle system, we see that it only took 4
1047 microseconds to perform the task switch. Note, since the trace
1048 marker in the schedule is before the actual "switch", we stop
1049 the tracing when the recorded task is about to schedule in. This
1050 may change if we add a new marker at the end of the scheduler.
1051
1052 Notice that the recorded task is 'sleep' with the PID of 4901
1053 and it has an rt_prio of 5. This priority is the user-space priority
1054 and not the internal kernel priority. The policy is 1 for
1055 SCHED_FIFO and 2 for SCHED_RR.
1056
1057 Doing the same with chrt -r 5 and ftrace_enabled set gives this:
1058
1059 # tracer: wakeup
1060 #
1061 wakeup latency trace v1.1.5 on 2.6.26-rc8
1062 --------------------------------------------------------------------
1063 latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
1064 -----------------
1065 | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5)
1066 -----------------
1067
1068 # _------=> CPU#
1069 # / _-----=> irqs-off
1070 # | / _----=> need-resched
1071 # || / _---=> hardirq/softirq
1072 # ||| / _--=> preempt-depth
1073 # |||| /
1074 # ||||| delay
1075 # cmd pid ||||| time | caller
1076 # \ / ||||| \ | /
1077 ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process)
1078 ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb)
1079 ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up)
1080 ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup)
1081 ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr)
1082 ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup)
1083 ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up)
1084 ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up)
1085 [...]
1086 ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt)
1087 ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit)
1088 ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit)
1089 ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq)
1090 [...]
1091 ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks)
1092 ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq)
1093 ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable)
1094 ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd)
1095 ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd)
1096 ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched)
1097 ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched)
1098 ksoftirq-7 1.N.2 33us : schedule (__cond_resched)
1099 ksoftirq-7 1.N.2 33us : add_preempt_count (schedule)
1100 ksoftirq-7 1.N.3 34us : hrtick_clear (schedule)
1101 ksoftirq-7 1dN.3 35us : _spin_lock (schedule)
1102 ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock)
1103 ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule)
1104 ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair)
1105 [...]
1106 ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline)
1107 ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock)
1108 ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline)
1109 ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock)
1110 ksoftirq-7 1d..4 50us : schedule (__cond_resched)
1111
1112 The interrupt went off while running ksoftirqd. This task runs
1113 at SCHED_OTHER. Why did we not see the 'N' set early? This may
1114 be a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K
1115 stacks configured, the interrupt and softirq run with their own
1116 stack. Some information is held on the top of the task's stack
1117 (need_resched and preempt_count are both stored there). The
1118 setting of the NEED_RESCHED bit is done directly to the task's
1119 stack, but the reading of the NEED_RESCHED is done by looking at
1120 the current stack, which in this case is the stack for the hard
1121 interrupt. This hides the fact that NEED_RESCHED has been set.
1122 We do not see the 'N' until we switch back to the task's
1123 assigned stack.
1124
1125 function
1126 --------
1127
1128 This tracer is the function tracer. Enabling the function tracer
1129 can be done from the debug file system. Make sure
1130 ftrace_enabled is set; otherwise this tracer is a nop.
1131
1132 # sysctl kernel.ftrace_enabled=1
1133 # echo function > current_tracer
1134 # echo 1 > tracing_enabled
1135 # usleep 1
1136 # echo 0 > tracing_enabled
1137 # cat trace
1138 # tracer: function
1139 #
1140 # TASK-PID CPU# TIMESTAMP FUNCTION
1141 # | | | | |
1142 bash-4003 [00] 123.638713: finish_task_switch <-schedule
1143 bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch
1144 bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq
1145 bash-4003 [00] 123.638715: hrtick_set <-schedule
1146 bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set
1147 bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave
1148 bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set
1149 bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore
1150 bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set
1151 bash-4003 [00] 123.638718: sub_preempt_count <-schedule
1152 bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule
1153 bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run
1154 bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion
1155 bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common
1156 bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq
1157 [...]
1158
1159
1160 Note: the function tracer uses ring buffers to store the above
1161 entries. The newest data may overwrite the oldest data.
1162 Sometimes using echo to stop the trace is not sufficient because
1163 the tracing could have overwritten the data that you wanted to
1164 record. For this reason, it is sometimes better to disable
1165 tracing directly from a program. This allows you to stop the
1166 tracing at the point that you hit the part that you are
1167 interested in. To disable the tracing directly from a C program,
1168 something like the following code snippet can be used:
1169
1170 int trace_fd;
1171 [...]
1172 int main(int argc, char *argv[]) {
1173 [...]
1174 trace_fd = open(tracing_file("tracing_enabled"), O_WRONLY);
1175 [...]
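	/* stop tracing as soon as the condition of interest is hit */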
1176 if (condition_hit()) {
1177 write(trace_fd, "0", 1);
1178 }
1179 [...]
1180 }
1181
1182
1183 Single thread tracing
1184 ---------------------
1185
1186 By writing into set_ftrace_pid you can trace a
1187 single thread. For example:
1188
1189 # cat set_ftrace_pid
1190 no pid
1191 # echo 3111 > set_ftrace_pid
1192 # cat set_ftrace_pid
1193 3111
1194 # echo function > current_tracer
1195 # cat trace | head
1196 # tracer: function
1197 #
1198 # TASK-PID CPU# TIMESTAMP FUNCTION
1199 # | | | | |
1200 yum-updatesd-3111 [003] 1637.254676: finish_task_switch <-thread_return
1201 yum-updatesd-3111 [003] 1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
1202 yum-updatesd-3111 [003] 1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
1203 yum-updatesd-3111 [003] 1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
1204 yum-updatesd-3111 [003] 1637.254685: fget_light <-do_sys_poll
1205 yum-updatesd-3111 [003] 1637.254686: pipe_poll <-do_sys_poll
1206 # echo -1 > set_ftrace_pid
1207 # cat trace |head
1208 # tracer: function
1209 #
1210 # TASK-PID CPU# TIMESTAMP FUNCTION
1211 # | | | | |
1212 ##### CPU 3 buffer started ####
1213 yum-updatesd-3111 [003] 1701.957688: free_poll_entry <-poll_freewait
1214 yum-updatesd-3111 [003] 1701.957689: remove_wait_queue <-free_poll_entry
1215 yum-updatesd-3111 [003] 1701.957691: fput <-free_poll_entry
1216 yum-updatesd-3111 [003] 1701.957692: audit_syscall_exit <-sysret_audit
1217 yum-updatesd-3111 [003] 1701.957693: path_put <-audit_syscall_exit
1218
1219 If you want to trace a program from the moment it starts executing,
1220 you could use something like this simple program:
1221
1222 #include <stdio.h>
1223 #include <stdlib.h>
1224 #include <sys/types.h>
1225 #include <sys/stat.h>
1226 #include <fcntl.h>
1227 #include <unistd.h>
#include <string.h>	/* for strcmp() */
1228
1229 #define _STR(x) #x
1230 #define STR(x) _STR(x)
1231 #define MAX_PATH 256
1232
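/* Find where debugfs is mounted by scanning /proc/mounts. */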
1233 const char *find_debugfs(void)
1234 {
1235 static char debugfs[MAX_PATH+1];
1236 static int debugfs_found;
1237 char type[100];
1238 FILE *fp;
1239
1240 if (debugfs_found)
1241 return debugfs;
1242
1243 if ((fp = fopen("/proc/mounts","r")) == NULL) {
1244 perror("/proc/mounts");
1245 return NULL;
1246 }
1247
1248 while (fscanf(fp, "%*s %"
1249 STR(MAX_PATH)
1250 "s %99s %*s %*d %*d\n",
1251 debugfs, type) == 2) {
1252 if (strcmp(type, "debugfs") == 0)
1253 break;
1254 }
1255 fclose(fp);
1256
1257 if (strcmp(type, "debugfs") != 0) {
1258 fprintf(stderr, "debugfs not mounted");
1259 return NULL;
1260 }
1261
1262 debugfs_found = 1;
1263
1264 return debugfs;
1265 }
1266
1267 const char *tracing_file(const char *file_name)
1268 {
1269 static char trace_file[MAX_PATH+1];
1270 snprintf(trace_file, MAX_PATH, "%s/%s", find_debugfs(), file_name);
1271 return trace_file;
1272 }
1273
1274 int main (int argc, char **argv)
1275 {
1276 if (argc < 2)
1277 exit(-1);
1278
1279 if (fork() > 0) {
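		/*
		 * Parent: select our own PID in set_ftrace_pid, start the
		 * function tracer, then exec the target command, so the
		 * traced PID becomes the program being launched. The child
		 * simply falls through and exits.
		 */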
1280 int fd, ffd;
1281 char line[64];
1282 int s;
1283
1284 ffd = open(tracing_file("current_tracer"), O_WRONLY);
1285 if (ffd < 0)
1286 exit(-1);
1287 write(ffd, "nop", 3);
1288
1289 fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
1290 s = sprintf(line, "%d\n", getpid());
1291 write(fd, line, s);
1292
1293 write(ffd, "function", 8);
1294
1295 close(fd);
1296 close(ffd);
1297
1298 execvp(argv[1], argv+1);
1299 }
1300
1301 return 0;
1302 }
1303
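A possible way to use it (a sketch; "ftrace-me" is just a hypothetical
name for the source file and the compiled binary):

	# gcc -o ftrace-me ftrace-me.c
	# ./ftrace-me ls -l
	# cat trace | head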
1304
1305 hw-branch-tracer (x86 only)
1306 ---------------------------
1307
1308 This tracer uses the x86 last branch tracing hardware feature to
1309 collect a branch trace on all cpus with relatively low overhead.
1310
1311 The tracer uses a fixed-size circular buffer per cpu and only
1312 traces ring 0 branches. The trace file dumps that buffer in the
1313 following format:
1314
1315 # tracer: hw-branch-tracer
1316 #
1317 # CPU# TO <- FROM
1318 0 scheduler_tick+0xb5/0x1bf <- task_tick_idle+0x5/0x6
1319 2 run_posix_cpu_timers+0x2b/0x72a <- run_posix_cpu_timers+0x25/0x72a
1320 0 scheduler_tick+0x139/0x1bf <- scheduler_tick+0xed/0x1bf
1321 0 scheduler_tick+0x17c/0x1bf <- scheduler_tick+0x148/0x1bf
1322 2 run_posix_cpu_timers+0x9e/0x72a <- run_posix_cpu_timers+0x5e/0x72a
1323 0 scheduler_tick+0x1b6/0x1bf <- scheduler_tick+0x1aa/0x1bf
1324
1325
1326 On a kernel oops, the tracer may be used to dump the trace of the
1327 oops'ing cpu into the system log. To enable this,
1328 ftrace_dump_on_oops must be set. To set ftrace_dump_on_oops, one
1329 can either use the sysctl command or set it via the proc file
1330 system interface.
1331
1332 sysctl kernel.ftrace_dump_on_oops=1
1333
1334 or
1335
1336 echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
1337
1338
1339 Here's an example of such a dump after a null pointer
1340 dereference in a kernel module:
1341
1342 [57848.105921] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
1343 [57848.106019] IP: [<ffffffffa0000006>] open+0x6/0x14 [oops]
1344 [57848.106019] PGD 2354e9067 PUD 2375e7067 PMD 0
1345 [57848.106019] Oops: 0002 [#1] SMP
1346 [57848.106019] last sysfs file: /sys/devices/pci0000:00/0000:00:1e.0/0000:20:05.0/local_cpus
1347 [57848.106019] Dumping ftrace buffer:
1348 [57848.106019] ---------------------------------
1349 [...]
1350 [57848.106019] 0 chrdev_open+0xe6/0x165 <- cdev_put+0x23/0x24
1351 [57848.106019] 0 chrdev_open+0x117/0x165 <- chrdev_open+0xfa/0x165
1352 [57848.106019] 0 chrdev_open+0x120/0x165 <- chrdev_open+0x11c/0x165
1353 [57848.106019] 0 chrdev_open+0x134/0x165 <- chrdev_open+0x12b/0x165
1354 [57848.106019] 0 open+0x0/0x14 [oops] <- chrdev_open+0x144/0x165
1355 [57848.106019] 0 page_fault+0x0/0x30 <- open+0x6/0x14 [oops]
1356 [57848.106019] 0 error_entry+0x0/0x5b <- page_fault+0x4/0x30
1357 [57848.106019] 0 error_kernelspace+0x0/0x31 <- error_entry+0x59/0x5b
1358 [57848.106019] 0 error_sti+0x0/0x1 <- error_kernelspace+0x2d/0x31
1359 [57848.106019] 0 page_fault+0x9/0x30 <- error_sti+0x0/0x1
1360 [57848.106019] 0 do_page_fault+0x0/0x881 <- page_fault+0x1a/0x30
1361 [...]
1362 [57848.106019] 0 do_page_fault+0x66b/0x881 <- is_prefetch+0x1ee/0x1f2
1363 [57848.106019] 0 do_page_fault+0x6e0/0x881 <- do_page_fault+0x67a/0x881
1364 [57848.106019] 0 oops_begin+0x0/0x96 <- do_page_fault+0x6e0/0x881
1365 [57848.106019] 0 trace_hw_branch_oops+0x0/0x2d <- oops_begin+0x9/0x96
1366 [...]
1367 [57848.106019] 0 ds_suspend_bts+0x2a/0xe3 <- ds_suspend_bts+0x1a/0xe3
1368 [57848.106019] ---------------------------------
1369 [57848.106019] CPU 0
1370 [57848.106019] Modules linked in: oops
1371 [57848.106019] Pid: 5542, comm: cat Tainted: G W 2.6.28 #23
1372 [57848.106019] RIP: 0010:[<ffffffffa0000006>] [<ffffffffa0000006>] open+0x6/0x14 [oops]
1373 [57848.106019] RSP: 0018:ffff880235457d48 EFLAGS: 00010246
1374 [...]
1375
1376
1377 function graph tracer
1378 ---------------------------
1379
1380 This tracer is similar to the function tracer except that it
1381 probes a function on its entry and its exit. This is done by
1382 using a dynamically allocated stack of return addresses in each
1383 task_struct. On function entry the tracer overwrites the return
1384 address of each function traced to set a custom probe. Thus the
1385 original return address is stored on the stack of return addresses
1386 in the task_struct.
1387
1388 Probing on both ends of a function leads to special features
1389 such as:
1390
1391 - measurement of a function's execution time
1392 - a reliable call stack from which to draw a graph of function calls
1393
1394 This tracer is useful in several situations:
1395
1396 - you want to find the reason for strange kernel behavior and
1397 need to see what happens in detail in any area (or in specific
1398 ones).
1399
1400 - you are experiencing weird latencies but it's difficult to
1401 find their origin.
1402
1403 - you want to find quickly which path is taken by a specific
1404 function
1405
1406 - you just want to peek inside a working kernel and want to see
1407 what happens there.
1408
1409 # tracer: function_graph
1410 #
1411 # CPU DURATION FUNCTION CALLS
1412 # | | | | | | |
1413
1414 0) | sys_open() {
1415 0) | do_sys_open() {
1416 0) | getname() {
1417 0) | kmem_cache_alloc() {
1418 0) 1.382 us | __might_sleep();
1419 0) 2.478 us | }
1420 0) | strncpy_from_user() {
1421 0) | might_fault() {
1422 0) 1.389 us | __might_sleep();
1423 0) 2.553 us | }
1424 0) 3.807 us | }
1425 0) 7.876 us | }
1426 0) | alloc_fd() {
1427 0) 0.668 us | _spin_lock();
1428 0) 0.570 us | expand_files();
1429 0) 0.586 us | _spin_unlock();
1430
1431
1432 There are several columns that can be dynamically
1433 enabled/disabled. You can use any combination of options you
1434 want, depending on your needs.
1435
1436 - The cpu number on which the function executed is enabled by
1437 default. It is sometimes better to only trace one cpu (see the
1438 tracing_cpumask file), or you might sometimes see unordered
1439 function calls when tracing switches between cpus.
1440
1441 hide: echo nofuncgraph-cpu > trace_options
1442 show: echo funcgraph-cpu > trace_options
1443
1444 - The duration (the function's execution time) is displayed on
1445 the closing bracket line of a function, or on the same line
1446 as the function itself if it is a leaf. It is enabled by
1447 default.
1448
1449 hide: echo nofuncgraph-duration > trace_options
1450 show: echo funcgraph-duration > trace_options
1451
1452 - The overhead field precedes the duration field when the
1453 duration exceeds certain thresholds.
1454
1455 hide: echo nofuncgraph-overhead > trace_options
1456 show: echo funcgraph-overhead > trace_options
1457 depends on: funcgraph-duration
1458
1459 ie:
1460
1461 0) | up_write() {
1462 0) 0.646 us | _spin_lock_irqsave();
1463 0) 0.684 us | _spin_unlock_irqrestore();
1464 0) 3.123 us | }
1465 0) 0.548 us | fput();
1466 0) + 58.628 us | }
1467
1468 [...]
1469
1470 0) | putname() {
1471 0) | kmem_cache_free() {
1472 0) 0.518 us | __phys_addr();
1473 0) 1.757 us | }
1474 0) 2.861 us | }
1475 0) ! 115.305 us | }
1476 0) ! 116.402 us | }
1477
1478 + means that the function exceeded 10 usecs.
1479 ! means that the function exceeded 100 usecs.
1480
1481
1482 - The task/pid field displays the cmdline and pid of the thread
1483 that executed the function. It is disabled by default.
1484
1485 hide: echo nofuncgraph-proc > trace_options
1486 show: echo funcgraph-proc > trace_options
1487
1488 ie:
1489
1490 # tracer: function_graph
1491 #
1492 # CPU TASK/PID DURATION FUNCTION CALLS
1493 # | | | | | | | | |
1494 0) sh-4802 | | d_free() {
1495 0) sh-4802 | | call_rcu() {
1496 0) sh-4802 | | __call_rcu() {
1497 0) sh-4802 | 0.616 us | rcu_process_gp_end();
1498 0) sh-4802 | 0.586 us | check_for_new_grace_period();
1499 0) sh-4802 | 2.899 us | }
1500 0) sh-4802 | 4.040 us | }
1501 0) sh-4802 | 5.151 us | }
1502 0) sh-4802 | + 49.370 us | }
1503
1504
1505 - The absolute time field is an absolute timestamp given by the
1506 system clock since it started. A snapshot of this time is
1507 given on each entry/exit of functions.
1508
1509 hide: echo nofuncgraph-abstime > trace_options
1510 show: echo funcgraph-abstime > trace_options
1511
1512 ie:
1513
1514 #
1515 # TIME CPU DURATION FUNCTION CALLS
1516 # | | | | | | | |
1517 360.774522 | 1) 0.541 us | }
1518 360.774522 | 1) 4.663 us | }
1519 360.774523 | 1) 0.541 us | __wake_up_bit();
1520 360.774524 | 1) 6.796 us | }
1521 360.774524 | 1) 7.952 us | }
1522 360.774525 | 1) 9.063 us | }
1523 360.774525 | 1) 0.615 us | journal_mark_dirty();
1524 360.774527 | 1) 0.578 us | __brelse();
1525 360.774528 | 1) | reiserfs_prepare_for_journal() {
1526 360.774528 | 1) | unlock_buffer() {
1527 360.774529 | 1) | wake_up_bit() {
1528 360.774529 | 1) | bit_waitqueue() {
1529 360.774530 | 1) 0.594 us | __phys_addr();
1530
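As a rough sketch of how these options combine, the following
sequence enables absolute timestamps and the task/pid column,
hides the CPU column, and then starts a graph trace (all file and
option names are taken from the examples above; adjust to taste):

 # echo funcgraph-abstime > trace_options
 # echo funcgraph-proc > trace_options
 # echo nofuncgraph-cpu > trace_options
 # echo function_graph > current_tracer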
1531
You can put comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep():

	trace_printk("I'm a comment!\n");
1538
1539 will produce:
1540
1541 1) | __might_sleep() {
1542 1) | /* I'm a comment! */
1543 1) 1.449 us | }
1544
1545
1546 You might find other useful features for this tracer in the
1547 following "dynamic ftrace" section such as tracing only specific
1548 functions or tasks.
1549
1550 dynamic ftrace
1551 --------------
1552
If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start
of every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
cause the kernel to be compiled with the -pg switch.)
1559
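As a rough user-space illustration of what the -pg switch does
(this is not kernel code, and the exact assembly and the name of
the mcount symbol vary with compiler and architecture):

 $ echo 'void foo(void) { }' > foo.c
 $ gcc -pg -S -o - foo.c | grep -i mcount
         call    mcount
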
1560 At compile time every C file object is run through the
1561 recordmcount.pl script (located in the scripts directory). This
1562 script will process the C object using objdump to find all the
1563 locations in the .text section that call mcount. (Note, only the
1564 .text section is processed, since processing other sections like
1565 .init.text may cause races due to those sections being freed).
1566
1567 A new section called "__mcount_loc" is created that holds
1568 references to all the mcount call sites in the .text section.
1569 This section is compiled back into the original object. The
1570 final linker will add all these references into a single table.
1571
On boot up, before SMP is initialized, the dynamic ftrace code
scans this table and converts all the listed call sites into nops. It
1574 also records the locations, which are added to the
1575 available_filter_functions list. Modules are processed as they
are loaded and before they are executed. When a module is
unloaded, its functions are removed from the ftrace function list
as well. This happens automatically in the module unload code;
the module author does not need to worry about it.
1580
When tracing is enabled, kstop_machine is called to prevent
races with the CPUs executing the code being modified (which can
cause a CPU to do undesirable things), and the nops are patched
back to calls. But this time, they do not call mcount (which is
just a function stub). They now call into the ftrace
infrastructure.
1587
One special side-effect of recording the functions being traced
is that we can now selectively choose which functions we wish to
trace and which call sites we want to leave as nops.
1592
1593 Two files are used, one for enabling and one for disabling the
1594 tracing of specified functions. They are:
1595
1596 set_ftrace_filter
1597
1598 and
1599
1600 set_ftrace_notrace
1601
1602 A list of available functions that you can add to these files is
1603 listed in:
1604
1605 available_filter_functions
1606
1607 # cat available_filter_functions
1608 put_prev_task_idle
1609 kmem_cache_create
1610 pick_next_task_rt
1611 get_online_cpus
1612 pick_next_task_fair
1613 mutex_lock
1614 [...]
1615
1616 If I am only interested in sys_nanosleep and hrtimer_interrupt:
1617
1618 # echo sys_nanosleep hrtimer_interrupt \
1619 > set_ftrace_filter
# echo function > current_tracer
1621 # echo 1 > tracing_enabled
1622 # usleep 1
1623 # echo 0 > tracing_enabled
1624 # cat trace
# tracer: function
1626 #
1627 # TASK-PID CPU# TIMESTAMP FUNCTION
1628 # | | | | |
1629 usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt
1630 usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call
1631 <idle>-0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt
1632
1633 To see which functions are being traced, you can cat the file:
1634
1635 # cat set_ftrace_filter
1636 hrtimer_interrupt
1637 sys_nanosleep
1638
1639
Perhaps this is not enough. The filters also allow simple
wildcards. Only the following are currently available:
1642
1643 <match>* - will match functions that begin with <match>
1644 *<match> - will match functions that end with <match>
1645 *<match>* - will match functions that have <match> in it
1646
1647 These are the only wild cards which are supported.
1648
1649 <match>*<match> will not work.
1650
1651 Note: It is better to use quotes to enclose the wild cards,
1652 otherwise the shell may expand the parameters into names
1653 of files in the local directory.
1654
1655 # echo 'hrtimer_*' > set_ftrace_filter
1656
1657 Produces:
1658
# tracer: function
1660 #
1661 # TASK-PID CPU# TIMESTAMP FUNCTION
1662 # | | | | |
1663 bash-4003 [00] 1480.611794: hrtimer_init <-copy_process
1664 bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set
1665 bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear
1666 bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel
1667 <idle>-0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt
1668 <idle>-0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt
1669 <idle>-0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt
1670 <idle>-0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt
1671 <idle>-0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt
1672
1673
Notice that we lost sys_nanosleep.
1675
1676 # cat set_ftrace_filter
1677 hrtimer_run_queues
1678 hrtimer_run_pending
1679 hrtimer_init
1680 hrtimer_cancel
1681 hrtimer_try_to_cancel
1682 hrtimer_forward
1683 hrtimer_start
1684 hrtimer_reprogram
1685 hrtimer_force_reprogram
1686 hrtimer_get_next_event
1687 hrtimer_interrupt
1688 hrtimer_nanosleep
1689 hrtimer_wakeup
1690 hrtimer_get_remaining
1691 hrtimer_get_res
1692 hrtimer_init_sleeper
1693
1694
1695 This is because the '>' and '>>' act just like they do in bash.
1696 To rewrite the filters, use '>'
1697 To append to the filters, use '>>'
1698
1699 To clear out a filter so that all functions will be recorded
1700 again:
1701
1702 # echo > set_ftrace_filter
1703 # cat set_ftrace_filter
1704 #
1705
Now let's rewrite the filter and then append to it.
1707
1708 # echo sys_nanosleep > set_ftrace_filter
1709 # cat set_ftrace_filter
1710 sys_nanosleep
1711 # echo 'hrtimer_*' >> set_ftrace_filter
1712 # cat set_ftrace_filter
1713 hrtimer_run_queues
1714 hrtimer_run_pending
1715 hrtimer_init
1716 hrtimer_cancel
1717 hrtimer_try_to_cancel
1718 hrtimer_forward
1719 hrtimer_start
1720 hrtimer_reprogram
1721 hrtimer_force_reprogram
1722 hrtimer_get_next_event
1723 hrtimer_interrupt
1724 sys_nanosleep
1725 hrtimer_nanosleep
1726 hrtimer_wakeup
1727 hrtimer_get_remaining
1728 hrtimer_get_res
1729 hrtimer_init_sleeper
1730
1731
The set_ftrace_notrace file prevents the listed functions from
being traced.
1734
1735 # echo '*preempt*' '*lock*' > set_ftrace_notrace
1736
1737 Produces:
1738
# tracer: function
1740 #
1741 # TASK-PID CPU# TIMESTAMP FUNCTION
1742 # | | | | |
1743 bash-4043 [01] 115.281644: finish_task_switch <-schedule
1744 bash-4043 [01] 115.281645: hrtick_set <-schedule
1745 bash-4043 [01] 115.281645: hrtick_clear <-hrtick_set
1746 bash-4043 [01] 115.281646: wait_for_completion <-__stop_machine_run
1747 bash-4043 [01] 115.281647: wait_for_common <-wait_for_completion
1748 bash-4043 [01] 115.281647: kthread_stop <-stop_machine_run
1749 bash-4043 [01] 115.281648: init_waitqueue_head <-kthread_stop
1750 bash-4043 [01] 115.281648: wake_up_process <-kthread_stop
1751 bash-4043 [01] 115.281649: try_to_wake_up <-wake_up_process
1752
1753 We can see that there's no more lock or preempt tracing.
1754
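The two files can be combined. As a sketch using the wildcard
syntax described above (set_ftrace_notrace takes precedence where
the two overlap), one could trace only the scheduler related
functions while still excluding the locking ones:

 # echo 'sched*' > set_ftrace_filter
 # echo '*lock*' > set_ftrace_notrace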
1755
1756 Dynamic ftrace with the function graph tracer
1757 ---------------------------------------------
1758
Although what has been explained above concerns both the
function tracer and the function graph tracer, there are some
special features only available in the function graph tracer.
1762
1763 If you want to trace only one function and all of its children,
1764 you just have to echo its name into set_graph_function:
1765
1766 echo __do_fault > set_graph_function
1767
1768 will produce the following "expanded" trace of the __do_fault()
1769 function:
1770
1771 0) | __do_fault() {
1772 0) | filemap_fault() {
1773 0) | find_lock_page() {
1774 0) 0.804 us | find_get_page();
1775 0) | __might_sleep() {
1776 0) 1.329 us | }
1777 0) 3.904 us | }
1778 0) 4.979 us | }
1779 0) 0.653 us | _spin_lock();
1780 0) 0.578 us | page_add_file_rmap();
1781 0) 0.525 us | native_set_pte_at();
1782 0) 0.585 us | _spin_unlock();
1783 0) | unlock_page() {
1784 0) 0.541 us | page_waitqueue();
1785 0) 0.639 us | __wake_up_bit();
1786 0) 2.786 us | }
1787 0) + 14.237 us | }
1788 0) | __do_fault() {
1789 0) | filemap_fault() {
1790 0) | find_lock_page() {
1791 0) 0.698 us | find_get_page();
1792 0) | __might_sleep() {
1793 0) 1.412 us | }
1794 0) 3.950 us | }
1795 0) 5.098 us | }
1796 0) 0.631 us | _spin_lock();
1797 0) 0.571 us | page_add_file_rmap();
1798 0) 0.526 us | native_set_pte_at();
1799 0) 0.586 us | _spin_unlock();
1800 0) | unlock_page() {
1801 0) 0.533 us | page_waitqueue();
1802 0) 0.638 us | __wake_up_bit();
1803 0) 2.793 us | }
1804 0) + 14.012 us | }
1805
1806 You can also expand several functions at once:
1807
1808 echo sys_open > set_graph_function
1809 echo sys_close >> set_graph_function
1810
Now if you want to go back to tracing all functions, you can
clear this special filter via:
1813
1814 echo > set_graph_function
1815
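Putting the pieces together, a complete function graph session
for a single function might look like this (a sketch; __do_fault
and the usleep workload are simply the examples used earlier):

 # echo __do_fault > set_graph_function
 # echo function_graph > current_tracer
 # echo 1 > tracing_enabled
 # usleep 1
 # echo 0 > tracing_enabled
 # cat trace
 # echo > set_graph_function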
1816
1817 trace_pipe
1818 ----------
1819
1820 The trace_pipe outputs the same content as the trace file, but
1821 the effect on the tracing is different. Every read from
1822 trace_pipe is consumed. This means that subsequent reads will be
1823 different. The trace is live.
1824
1825 # echo function > current_tracer
1826 # cat trace_pipe > /tmp/trace.out &
1827 [1] 4153
1828 # echo 1 > tracing_enabled
1829 # usleep 1
1830 # echo 0 > tracing_enabled
1831 # cat trace
1832 # tracer: function
1833 #
1834 # TASK-PID CPU# TIMESTAMP FUNCTION
1835 # | | | | |
1836
1837 #
1838 # cat /tmp/trace.out
1839 bash-4043 [00] 41.267106: finish_task_switch <-schedule
1840 bash-4043 [00] 41.267106: hrtick_set <-schedule
1841 bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set
1842 bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run
1843 bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion
1844 bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run
1845 bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop
1846 bash-4043 [00] 41.267110: wake_up_process <-kthread_stop
1847 bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process
1848 bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up
1849
1850
Note, reading from the trace_pipe file will block until more
input is added. Changing the tracer causes trace_pipe to issue an
EOF; this is why we needed to set the function tracer _before_ we
"cat" the trace_pipe file.
1855
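For instance, the background "cat" started above can be made to
exit by switching tracers, since that makes trace_pipe return an
EOF (a sketch):

 # echo nop > current_tracer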
1856
1857 trace entries
1858 -------------
1859
Having too much or not enough data can be troublesome when
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the size in kilobytes of the buffer for each
CPU. To know the full size, multiply this number by the number
of possible CPUs.
1866
1867 # cat buffer_size_kb
1868 1408 (units kilobytes)
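
For example, on a machine with 4 possible CPUs (the CPU count
here is only an illustration), the 1408 kilobytes shown above
amount to 4 * 1408 = 5632 kilobytes of buffer space in total.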
1869
1870 Note, to modify this, you must have tracing completely disabled.
1871 To do that, echo "nop" into the current_tracer. If the
1872 current_tracer is not set to "nop", an EINVAL error will be
1873 returned.
1874
1875 # echo nop > current_tracer
1876 # echo 10000 > buffer_size_kb
1877 # cat buffer_size_kb
1878 10000 (units kilobytes)
1879
1880 The number of pages which will be allocated is limited to a
1881 percentage of available memory. Allocating too much will produce
1882 an error.
1883
1884 # echo 1000000000000 > buffer_size_kb
1885 -bash: echo: write error: Cannot allocate memory
1886 # cat buffer_size_kb
1887 85
1888
1889 -----------
1890
1891 More details can be found in the source code, in the
1892 kernel/trace/*.c files.