1\input texinfo @c -*-texinfo-*-
2@setfilename gprof.info
3@settitle GNU gprof
4@setchapternewpage odd
5
6@ifinfo
7@c This is a dir.info fragment to support semi-automated addition of
8@c manuals to an info tree. zoo@cygnus.com is developing this facility.
9@format
10START-INFO-DIR-ENTRY
11* gprof: (gprof). Profiling your program's execution
12END-INFO-DIR-ENTRY
13@end format
14@end ifinfo
15
16@ifinfo
17This file documents the gprof profiler of the GNU system.
18
19Copyright (C) 1988, 1992, 1997, 1998 Free Software Foundation, Inc.
20
21Permission is granted to make and distribute verbatim copies of
22this manual provided the copyright notice and this permission notice
23are preserved on all copies.
24
25@ignore
26Permission is granted to process this file through TeX and print the
27results, provided the printed document carries copying permission
28notice identical to this one except for the removal of this paragraph
29(this paragraph not being relevant to the printed manual).
30
31@end ignore
32Permission is granted to copy and distribute modified versions of this
33manual under the conditions for verbatim copying, provided that the entire
34resulting derived work is distributed under the terms of a permission
35notice identical to this one.
36
37Permission is granted to copy and distribute translations of this manual
38into another language, under the above conditions for modified versions.
39@end ifinfo
40
41@finalout
42@smallbook
43
44@titlepage
45@title GNU gprof
46@subtitle The @sc{gnu} Profiler
47@author Jay Fenlason and Richard Stallman
48
49@page
50
51This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
52can use it to determine which parts of a program are taking most of the
53execution time. We assume that you know how to write, compile, and
54execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
55
56This manual was edited January 1993 by Jeffrey Osier
57and updated September 1997 by Brent Baccala.
58
59@vskip 0pt plus 1filll
60Copyright @copyright{} 1988, 1992, 1997, 1998 Free Software Foundation, Inc.
61
62Permission is granted to make and distribute verbatim copies of
63this manual provided the copyright notice and this permission notice
64are preserved on all copies.
65
66@ignore
67Permission is granted to process this file through TeX and print the
68results, provided the printed document carries copying permission
69notice identical to this one except for the removal of this paragraph
70(this paragraph not being relevant to the printed manual).
71
72@end ignore
73Permission is granted to copy and distribute modified versions of this
74manual under the conditions for verbatim copying, provided that the entire
75resulting derived work is distributed under the terms of a permission
76notice identical to this one.
77
78Permission is granted to copy and distribute translations of this manual
79into another language, under the same conditions as for modified versions.
80
81@end titlepage
82
83@ifinfo
84@node Top
85@top Profiling a Program: Where Does It Spend Its Time?
86
87This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
88can use it to determine which parts of a program are taking most of the
89execution time. We assume that you know how to write, compile, and
90execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
91
92This manual was updated August 1997 by Brent Baccala.
93
94@menu
95* Introduction:: What profiling means, and why it is useful.
96
97* Compiling:: How to compile your program for profiling.
98* Executing:: Executing your program to generate profile data
99* Invoking:: How to run @code{gprof}, and its options
100
101* Output:: Interpreting @code{gprof}'s output
102
103* Inaccuracy:: Potential problems you should be aware of
104* How do I?:: Answers to common questions
105* Incompatibilities:: (between @sc{gnu} @code{gprof} and Unix @code{gprof}.)
106* Details:: Details of how profiling is done
107@end menu
108@end ifinfo
109
110@node Introduction
111@chapter Introduction to Profiling
112
113Profiling allows you to learn where your program spent its time and which
114functions called which other functions while it was executing. This
115information can show you which pieces of your program are slower than you
116expected, and might be candidates for rewriting to make your program
117execute faster. It can also tell you which functions are being called more
118or less often than you expected. This may help you spot bugs that might
119otherwise go unnoticed.
120
121Since the profiler uses information collected during the actual execution
122of your program, it can be used on programs that are too large or too
123complex to analyze by reading the source. However, how your program is run
124will affect the information that shows up in the profile data. If you
125don't use some feature of your program while it is being profiled, no
126profile information will be generated for that feature.
127
128Profiling has several steps:
129
130@itemize @bullet
131@item
132You must compile and link your program with profiling enabled.
133@xref{Compiling}.
134
135@item
136You must execute your program to generate a profile data file.
137@xref{Executing}.
138
139@item
140You must run @code{gprof} to analyze the profile data.
141@xref{Invoking}.
142@end itemize
143
144The next three chapters explain these steps in greater detail.
145
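For example, a complete session for a hypothetical one-file program
might look like this (the file names are only illustrative):

@example
cc -o myprog -g -pg myprog.c
./myprog
gprof myprog gmon.out > profile.txt
@end example
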
146Several forms of output are available from the analysis.
147
148The @dfn{flat profile} shows how much time your program spent in each function,
149and how many times that function was called. If you simply want to know
150which functions burn most of the cycles, it is stated concisely here.
151@xref{Flat Profile}.
152
153The @dfn{call graph} shows, for each function, which functions called it, which
154other functions it called, and how many times. There is also an estimate
155of how much time was spent in the subroutines of each function. This can
156suggest places where you might try to eliminate function calls that use a
157lot of time. @xref{Call Graph}.
158
159The @dfn{annotated source} listing is a copy of the program's
160source code, labeled with the number of times each line of the
161program was executed. @xref{Annotated Source}.
162
163To better understand how profiling works, you may wish to read
164a description of its implementation.
165@xref{Implementation}.
166
167@node Compiling
168@chapter Compiling a Program for Profiling
169
170The first step in generating profile information for your program is
171to compile and link it with profiling enabled.
172
173To compile a source file for profiling, specify the @samp{-pg} option when
174you run the compiler. (This is in addition to the options you normally
175use.)
176
177To link the program for profiling, if you use a compiler such as @code{cc}
178to do the linking, simply specify @samp{-pg} in addition to your usual
179options. The same option, @samp{-pg}, alters either compilation or linking
180to do what is necessary for profiling. Here are examples:
181
182@example
183cc -g -c myprog.c utils.c -pg
184cc -o myprog myprog.o utils.o -pg
185@end example
186
187The @samp{-pg} option also works with a command that both compiles and links:
188
189@example
190cc -o myprog myprog.c utils.c -g -pg
191@end example
192
193If you run the linker @code{ld} directly instead of through a compiler
194such as @code{cc}, you may have to specify a profiling startup file
195@file{gcrt0.o} as the first input file instead of the usual startup
196file @file{crt0.o}. In addition, you would probably want to
197specify the profiling C library, @file{libc_p.a}, by writing
198@samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
199necessary, but doing this gives you number-of-calls information for
200standard library functions such as @code{read} and @code{open}. For
201example:
202
203@example
204ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
205@end example
206
207If you compile only some of the modules of the program with @samp{-pg}, you
208can still profile the program, but you won't get complete information about
209the modules that were compiled without @samp{-pg}. The only information
210you get for the functions in those modules is the total time spent in them;
211there is no record of how many times they were called, or from where. This
212will not affect the flat profile (except that the @code{calls} field for
213the functions will be blank), but will greatly reduce the usefulness of the
214call graph.
215
216If you wish to perform line-by-line profiling,
217you will also need to specify the @samp{-g} option,
218instructing the compiler to insert debugging symbols into the program
219that match program addresses to source code lines.
220@xref{Line-by-line}.
221
222In addition to the @samp{-pg} and @samp{-g} options,
223you may also wish to specify the @samp{-a} option when compiling.
224This will instrument
225the program to perform basic-block counting. As the program runs,
226it will count how many times it executed each branch of each @samp{if}
227statement, each iteration of each @samp{do} loop, etc. This will
228enable @code{gprof} to construct an annotated source code
229listing showing how many times each line of code was executed.
230
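For example, to compile for function profiling, line-by-line profiling,
and basic-block counting all at once, you might use a command like the
following (assuming @code{gcc}; the file name is illustrative):

@example
gcc -g -pg -a -o myprog myprog.c
@end example
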
231@node Executing
232@chapter Executing the Program
233
234Once the program is compiled for profiling, you must run it in order to
235generate the information that @code{gprof} needs. Simply run the program
236as usual, using the normal arguments, file names, etc. The program should
237run normally, producing the same output as usual. It will, however, run
238somewhat slower than normal because of the time spent collecting and
239writing the profile data.
240
241The way you run the program---the arguments and input that you give
242it---may have a dramatic effect on what the profile information shows. The
243profile data will describe the parts of the program that were activated for
244the particular input you use. For example, if the first command you give
245to your program is to quit, the profile data will show the time used in
246initialization and in cleanup, but not much else.
247
248Your program will write the profile data into a file called @file{gmon.out}
249just before exiting. If there is already a file called @file{gmon.out},
250its contents are overwritten. There is currently no way to tell the
251program to write the profile data under a different name, but you can rename
252the file afterward if you are concerned that it may be overwritten.
253
254In order to write the @file{gmon.out} file properly, your program must exit
255normally: by returning from @code{main} or by calling @code{exit}. Calling
256the low-level function @code{_exit} does not write the profile data, and
257neither does abnormal termination due to an unhandled signal.
258
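If your program normally terminates by being sent a signal (for example,
an interrupt typed by the user), one possible workaround is to catch the
signal yourself and call @code{exit}, so that the profile data still gets
written. The sketch below is only an illustration, not part of the
profiling library; the handler name and the choice of signal are assumptions:

@smallexample
#include <signal.h>
#include <stdlib.h>

/* Illustrative only: turn an interrupt into a normal exit so that
   the profiling runtime gets a chance to write gmon.out.  */
static void
handle_interrupt (int sig)
@{
  (void) sig;     /* unused */
  exit (1);
@}

int
main (void)
@{
  signal (SIGINT, handle_interrupt);
  /* @dots{} the rest of the program @dots{} */
  return 0;
@}
@end smallexample

Calling @code{exit} from a signal handler is a simplification, but it is
usually adequate for collecting profile data.
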
259The @file{gmon.out} file is written in the program's @emph{current working
260directory} at the time it exits. This means that if your program calls
261@code{chdir}, the @file{gmon.out} file will be left in the last directory
262your program @code{chdir}'d to. If you don't have permission to write in
263this directory, the file is not written, and you will get an error message.
264
265Older versions of the @sc{gnu} profiling library may also write a file
266called @file{bb.out}. This file, if present, contains a human-readable
267listing of the basic-block execution counts. Unfortunately, the
268appearance of a human-readable @file{bb.out} means the basic-block
269counts didn't get written into @file{gmon.out}.
270The Perl script @code{bbconv.pl}, included with the @code{gprof}
271source distribution, will convert a @file{bb.out} file into
272a format readable by @code{gprof}.
273
274@node Invoking
275@chapter @code{gprof} Command Summary
276
277After you have a profile data file @file{gmon.out}, you can run @code{gprof}
278to interpret the information in it. The @code{gprof} program prints a
279flat profile and a call graph on standard output. Typically you would
280redirect the output of @code{gprof} into a file with @samp{>}.
281
282You run @code{gprof} like this:
283
284@smallexample
285gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
286@end smallexample
287
288@noindent
289Here square-brackets indicate optional arguments.
290
291If you omit the executable file name, the file @file{a.out} is used. If
292you give no profile data file name, the file @file{gmon.out} is used. If
293any file is not in the proper format, or if the profile data file does not
294appear to belong to the executable file, an error message is printed.
295
296You can give more than one profile data file by entering all their names
297after the executable file name; then the statistics in all the data files
298are summed together.
299
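For example, if you had saved the data from two runs under different
names (the names here are only illustrative), you could analyze both
runs together like this:

@example
gprof myprog run1.gmon run2.gmon > combined-profile.txt
@end example
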
300The order of these options does not matter.
301
302@menu
303* Output Options:: Controlling @code{gprof}'s output style
304* Analysis Options:: Controlling how @code{gprof} analyses its data
305* Miscellaneous Options::
306* Deprecated Options:: Options you no longer need to use, but which
307 have been retained for compatibility
308* Symspecs:: Specifying functions to include or exclude
309@end menu
310
311@node Output Options,Analysis Options,,Invoking
312@section Output Options
313
314These options specify which of several output formats
315@code{gprof} should produce.
316
317Many of these options take an optional @dfn{symspec} to specify
318functions to be included or excluded. These options can be
319specified multiple times, with different symspecs, to include
320or exclude sets of symbols. @xref{Symspecs}.
321
322Specifying any of these options overrides the default (@samp{-p -q}),
323which prints a flat profile and call graph analysis
324for all functions.
325
326@table @code
327
328@item -A[@var{symspec}]
329@itemx --annotated-source[=@var{symspec}]
330The @samp{-A} option causes @code{gprof} to print annotated source code.
331If @var{symspec} is specified, print output only for matching symbols.
332@xref{Annotated Source}.
333
334@item -b
335@itemx --brief
336If the @samp{-b} option is given, @code{gprof} doesn't print the
337verbose blurbs that try to explain the meaning of all of the fields in
338the tables. This is useful if you intend to print out the output, or
339are tired of seeing the blurbs.
340
341@item -C[@var{symspec}]
342@itemx --exec-counts[=@var{symspec}]
343The @samp{-C} option causes @code{gprof} to
344print a tally of functions and the number of times each was called.
345If @var{symspec} is specified, print tally only for matching symbols.
346
347If the profile data file contains basic-block count records, specifying
348the @samp{-l} option, along with @samp{-C}, will cause basic-block
349execution counts to be tallied and displayed.
350
351@item -i
352@itemx --file-info
353The @samp{-i} option causes @code{gprof} to display summary information
354about the profile data file(s) and then exit. The number of histogram,
355call graph, and basic-block count records is displayed.
356
357@item -I @var{dirs}
358@itemx --directory-path=@var{dirs}
359The @samp{-I} option specifies a list of search directories in
360which to find source files. Environment variable @var{GPROF_PATH}
361can also be used to convey this information.
362Used mostly for annotated source output.
363
364@item -J[@var{symspec}]
365@itemx --no-annotated-source[=@var{symspec}]
366The @samp{-J} option causes @code{gprof} not to
367print annotated source code.
368If @var{symspec} is specified, @code{gprof} prints annotated source,
369but excludes matching symbols.
370
371@item -L
372@itemx --print-path
373Normally, source filenames are printed with the path
374component suppressed. The @samp{-L} option causes @code{gprof}
375to print the full pathname of
376source filenames, which is determined
377from symbolic debugging information in the image file
378and is relative to the directory in which the compiler
379was invoked.
380
381@item -p[@var{symspec}]
382@itemx --flat-profile[=@var{symspec}]
383The @samp{-p} option causes @code{gprof} to print a flat profile.
384If @var{symspec} is specified, print flat profile only for matching symbols.
385@xref{Flat Profile}.
386
387@item -P[@var{symspec}]
388@itemx --no-flat-profile[=@var{symspec}]
389The @samp{-P} option causes @code{gprof} to suppress printing a flat profile.
390If @var{symspec} is specified, @code{gprof} prints a flat profile,
391but excludes matching symbols.
392
393@item -q[@var{symspec}]
394@itemx --graph[=@var{symspec}]
395The @samp{-q} option causes @code{gprof} to print the call graph analysis.
396If @var{symspec} is specified, print call graph only for matching symbols
397and their children.
398@xref{Call Graph}.
399
400@item -Q[@var{symspec}]
401@itemx --no-graph[=@var{symspec}]
402The @samp{-Q} option causes @code{gprof} to suppress printing the
403call graph.
404If @var{symspec} is specified, @code{gprof} prints a call graph,
405but excludes matching symbols.
406
407@item -y
408@itemx --separate-files
409This option affects annotated source output only.
410Normally, @code{gprof} prints annotated source files
411to standard output. If this option is specified,
412annotated source for a file named @file{path/filename}
413is generated in the file @file{filename-ann}.
414
415@item -Z[@var{symspec}]
416@itemx --no-exec-counts[=@var{symspec}]
417The @samp{-Z} option causes @code{gprof} not to
418print a tally of functions and the number of times each was called.
419If @var{symspec} is specified, print tally, but exclude matching symbols.
420
421@item --function-ordering
422The @samp{--function-ordering} option causes @code{gprof} to print a
423suggested function ordering for the program based on profiling data.
424This option suggests an ordering which may improve paging, tlb and
425cache behavior for the program on systems which support arbitrary
426ordering of functions in an executable.
427
428The exact details of how to force the linker to place functions
429in a particular order are system dependent and outside the scope of this
430manual.
431
432@item --file-ordering @var{map_file}
433The @samp{--file-ordering} option causes @code{gprof} to print a
434suggested .o link line ordering for the program based on profiling data.
435This option suggests an ordering which may improve paging, TLB, and
436cache behavior for the program on systems which do not support arbitrary
437ordering of functions in an executable.
438
439Use of the @samp{-a} argument is highly recommended with this option.
440
441The @var{map_file} argument is a pathname to a file which provides
442function name to object file mappings. The format of the file is similar to
443the output of the program @code{nm}.
444
445@smallexample
446@group
447c-parse.o:00000000 T yyparse
448c-parse.o:00000004 C yyerrflag
449c-lang.o:00000000 T maybe_objc_method_name
450c-lang.o:00000000 T print_lang_statistics
451c-lang.o:00000000 T recognize_objc_keyword
452c-decl.o:00000000 T print_lang_identifier
453c-decl.o:00000000 T print_lang_type
454@dots{}
455
456@end group
457@end smallexample
458
459@sc{gnu} @code{nm}'s @samp{--extern-only}, @samp{--defined-only}, @samp{-v}, and @samp{--print-file-name} options can be used to create @var{map_file}.
460
461@item -T
462@itemx --traditional
463The @samp{-T} option causes @code{gprof} to print its output in
464``traditional'' BSD style.
465
466@item -w @var{width}
467@itemx --width=@var{width}
468Sets width of output lines to @var{width}.
469Currently only used when printing the function index at the bottom
470of the call graph.
471
472@item -x
473@itemx --all-lines
474This option affects annotated source output only.
475By default, only the lines at the beginning of a basic-block
476are annotated. If this option is specified, every line in
477a basic-block is annotated by repeating the annotation for the
478first line. This behavior is similar to @code{tcov}'s @samp{-a}.
479
480@end table
481
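As an illustration of combining these options, the commands below (with
a hypothetical executable @file{myprog}) first print just a flat profile,
without the explanatory blurbs, and then write annotated source listings
to per-file @file{-ann} output files:

@example
gprof -b -p myprog gmon.out > flat.txt
gprof -A -x -y myprog gmon.out
@end example
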
482@node Analysis Options,Miscellaneous Options,Output Options,Invoking
483@section Analysis Options
484
485@table @code
486
487@item -a
488@itemx --no-static
489The @samp{-a} option causes @code{gprof} to suppress the printing of
490statically declared (private) functions. (These are functions whose
491names are not listed as global, and which are not visible outside the
492file/function/block where they were defined.) Time spent in these
493functions, calls to/from them, etc., will all be attributed to the
494function that was loaded directly before it in the executable file.
495@c This is compatible with Unix @code{gprof}, but a bad idea.
496This option affects both the flat profile and the call graph.
497
498@item -c
499@itemx --static-call-graph
500The @samp{-c} option causes the call graph of the program to be
501augmented by a heuristic which examines the text space of the object
502file and identifies function calls in the binary machine code.
503Since normal call graph records are only generated when functions are
504entered, this option identifies children that could have been called,
505but never were. Calls to functions that were not compiled with
506profiling enabled are also identified, but only if symbol table
507entries are present for them.
508Calls to dynamic library routines are typically @emph{not} found
509by this option.
510Parents or children identified via this heuristic
511are indicated in the call graph with call counts of @samp{0}.
512
513@item -D
514@itemx --ignore-non-functions
515The @samp{-D} option causes @code{gprof} to ignore symbols which
516are not known to be functions. This option will give more accurate
517profile data on systems where it is supported (Solaris and HP-UX, for
518example).
519
520@item -k @var{from}/@var{to}
521The @samp{-k} option allows you to delete from the call graph any arcs from
522symbols matching symspec @var{from} to those matching symspec @var{to}.
523
524@item -l
525@itemx --line
526The @samp{-l} option enables line-by-line profiling, which causes
527histogram hits to be charged to individual source code lines,
528instead of functions.
529If the program was compiled with basic-block counting enabled,
530this option will also identify how many times each line of
531code was executed.
532While line-by-line profiling can help isolate where in a large function
533a program is spending its time, it also significantly increases
534the running time of @code{gprof}, and magnifies statistical
535inaccuracies.
536@xref{Sampling Error}.
537
538@item -m @var{num}
539@itemx --min-count=@var{num}
540This option affects execution count output only.
541Symbols that are executed less than @var{num} times are suppressed.
542
543@item -n[@var{symspec}]
544@itemx --time[=@var{symspec}]
545The @samp{-n} option causes @code{gprof}, in its call graph analysis,
546to only propagate times for symbols matching @var{symspec}.
547
548@item -N[@var{symspec}]
549@itemx --no-time[=@var{symspec}]
550The @samp{-N} option causes @code{gprof}, in its call graph analysis,
551not to propagate times for symbols matching @var{symspec}.
552
553@item -z
554@itemx --display-unused-functions
555If you give the @samp{-z} option, @code{gprof} will mention all
556functions in the flat profile, even those that were never called, and
557that had no time spent in them. This is useful in conjunction with the
558@samp{-c} option for discovering which routines were never called.
559
560@end table
561
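As an illustration (the file names are hypothetical), the first command
below augments the call graph with statically-discovered arcs and lists
functions that were never called, while the second charges histogram
hits to individual source lines instead of whole functions:

@example
gprof -c -z myprog gmon.out > callgraph.txt
gprof -l myprog gmon.out > lines.txt
@end example
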
562@node Miscellaneous Options,Deprecated Options,Analysis Options,Invoking
563@section Miscellaneous Options
564
565@table @code
566
567@item -d[@var{num}]
568@itemx --debug[=@var{num}]
569The @samp{-d @var{num}} option specifies debugging options.
570If @var{num} is not specified, all debugging options are enabled.
571@xref{Debugging}.
572
573@item -O@var{name}
574@itemx --file-format=@var{name}
575Selects the format of the profile data files.
576Recognized formats are @samp{auto} (the default), @samp{bsd}, @samp{magic},
577and @samp{prof} (not yet supported).
578
579@item -s
580@itemx --sum
581The @samp{-s} option causes @code{gprof} to summarize the information
582in the profile data files it read in, and write out a profile data
583file called @file{gmon.sum}, which contains all the information from
584the profile data files that @code{gprof} read in. The file @file{gmon.sum}
585may be one of the specified input files; the effect of this is to
586merge the data in the other input files into @file{gmon.sum}.
587
588Eventually you can run @code{gprof} again without @samp{-s} to analyze the
589cumulative data in the file @file{gmon.sum}.
590
591@item -v
592@itemx --version
593The @samp{-v} flag causes @code{gprof} to print the current version
594number, and then exit.
595
596@end table
597
598@node Deprecated Options,Symspecs,Miscellaneous Options,Invoking
599@section Deprecated Options
600
601These options have been replaced with newer versions that use symspecs.
602
603@table @code
604
605@item -e @var{function_name}
606The @samp{-e @var{function}} option tells @code{gprof} to not print
607information about the function @var{function_name} (and its
608children@dots{}) in the call graph. The function will still be listed
609as a child of any functions that call it, but its index number will be
610shown as @samp{[not printed]}. More than one @samp{-e} option may be
611given; only one @var{function_name} may be indicated with each @samp{-e}
612option.
613
614@item -E @var{function_name}
615The @code{-E @var{function}} option works like the @code{-e} option, but
616time spent in the function (and children who were not called from
617anywhere else), will not be used to compute the percentages-of-time for
618the call graph. More than one @samp{-E} option may be given; only one
619@var{function_name} may be indicated with each @samp{-E} option.
620
621@item -f @var{function_name}
622The @samp{-f @var{function}} option causes @code{gprof} to limit the
623call graph to the function @var{function_name} and its children (and
624their children@dots{}). More than one @samp{-f} option may be given;
625only one @var{function_name} may be indicated with each @samp{-f}
626option.
627
628@item -F @var{function_name}
629The @samp{-F @var{function}} option works like the @code{-f} option, but
630only time spent in the function and its children (and their
631children@dots{}) will be used to determine total-time and
632percentages-of-time for the call graph. More than one @samp{-F} option
633may be given; only one @var{function_name} may be indicated with each
634@samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.
635
636@end table
637
638Note that only one function can be specified with each @code{-e},
639@code{-E}, @code{-f} or @code{-F} option. To specify more than one
640function, use multiple options. For example, this command:
641
642@example
643gprof -e boring -f foo -f bar myprogram > gprof.output
644@end example
645
646@noindent
647lists in the call graph all functions that were reached from either
648@code{foo} or @code{bar} and were not reachable from @code{boring}.
649
650@node Symspecs,,Deprecated Options,Invoking
651@section Symspecs
652
653Many of the output options allow functions to be included or excluded
654using @dfn{symspecs} (symbol specifications), which observe the
655following syntax:
656
657@example
658 filename_containing_a_dot
659| funcname_not_containing_a_dot
660| linenumber
661| ( [ any_filename ] `:' ( any_funcname | linenumber ) )
662@end example
663
664Here are some sample symspecs:
665
666@table @code
667@item main.c
668Selects everything in file "main.c"---the
669dot in the string tells gprof to interpret
670the string as a filename, rather than as
671a function name. To select a file whose
672name does not contain a dot, a trailing colon
673should be specified. For example, "odd:" is
674interpreted as the file named "odd".
675
676@item main
677Selects all functions named "main". Notice
678that there may be multiple instances of the
679same function name because some of the
680definitions may be local (i.e., static).
681Unless a function name is unique in a program,
682you must use the colon notation explained
683below to specify a function from a specific
684source file. Sometimes, function names contain
685dots. In such cases, it is necessary to
686add a leading colon to the name. For example,
687":.mul" selects function ".mul".
688
689@item main.c:main
690Selects function "main" in file "main.c".
691
692@item main.c:134
693Selects line 134 in file "main.c".
694@end table
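
For example, here is a hypothetical invocation (the program and symbol
names are illustrative) that prints a flat profile restricted to the
contents of @file{main.c} and a call graph restricted to the function
@code{report} and its children:

@example
gprof --flat-profile=main.c --graph=report myprog gmon.out
@end example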
695
696@node Output
697@chapter Interpreting @code{gprof}'s Output
698
699@code{gprof} can produce several different output styles, the
700most important of which are described below. The simplest output
701styles (file information, execution count, and function and file ordering)
702are not described here, but are documented with the respective options
703that trigger them.
704@xref{Output Options}.
705
706@menu
707* Flat Profile:: The flat profile shows how much time was spent
708 executing directly in each function.
709* Call Graph:: The call graph shows which functions called which
710 others, and how much time each function used
711 when its subroutine calls are included.
712* Line-by-line:: @code{gprof} can analyze individual source code lines
713* Annotated Source:: The annotated source listing displays source code
714 labeled with execution counts
715@end menu
716
717
718@node Flat Profile,Call Graph,,Output
719@section The Flat Profile
720@cindex flat profile
721
722The @dfn{flat profile} shows the total amount of time your program
723spent executing each function. Unless the @samp{-z} option is given,
724functions with no apparent time spent in them, and no apparent calls
725to them, are not mentioned. Note that if a function was not compiled
726for profiling, and didn't run long enough to show up on the program
727counter histogram, it will be indistinguishable from a function that
728was never called.
729
730This is part of a flat profile for a small program:
731
732@smallexample
733@group
734Flat profile:
735
736Each sample counts as 0.01 seconds.
737 % cumulative self self total
738 time seconds seconds calls ms/call ms/call name
739 33.34 0.02 0.02 7208 0.00 0.00 open
740 16.67 0.03 0.01 244 0.04 0.12 offtime
741 16.67 0.04 0.01 8 1.25 1.25 memccpy
742 16.67 0.05 0.01 7 1.43 1.43 write
743 16.67 0.06 0.01 mcount
744 0.00 0.06 0.00 236 0.00 0.00 tzset
745 0.00 0.06 0.00 192 0.00 0.00 tolower
746 0.00 0.06 0.00 47 0.00 0.00 strlen
747 0.00 0.06 0.00 45 0.00 0.00 strchr
748 0.00 0.06 0.00 1 0.00 50.00 main
749 0.00 0.06 0.00 1 0.00 0.00 memcpy
750 0.00 0.06 0.00 1 0.00 10.11 print
751 0.00 0.06 0.00 1 0.00 0.00 profil
752 0.00 0.06 0.00 1 0.00 50.00 report
753@dots{}
754@end group
755@end smallexample
756
757@noindent
758The functions are sorted first by decreasing run-time spent in them,
759then by decreasing number of calls, then alphabetically by name. The
760functions @samp{mcount} and @samp{profil} are part of the profiling
761apparatus and appear in every flat profile; their time gives a measure of
762the amount of overhead due to profiling.
763
764Just before the column headers, a statement appears indicating
765how much time each sample counted as.
766This @dfn{sampling period} estimates the margin of error in each of the time
767figures. A time figure that is not much larger than this is not
768reliable. In this example, each sample counted as 0.01 seconds,
769suggesting a 100 Hz sampling rate.
770The program's total execution time was 0.06
771seconds, as indicated by the @samp{cumulative seconds} field. Since
772each sample counted for 0.01 seconds, this means only six samples
773were taken during the run. Two of the samples occurred while the
774program was in the @samp{open} function, as indicated by the
775@samp{self seconds} field. The other four samples
776occurred one each in @samp{offtime}, @samp{memccpy}, @samp{write},
777and @samp{mcount}.
778Since only six samples were taken, none of these values can
779be regarded as particularly reliable.
780In another run,
781the @samp{self seconds} field for
782@samp{mcount} might well be @samp{0.00} or @samp{0.02}.
783@xref{Sampling Error}, for a complete discussion.
784
785The remaining functions in the listing (those whose
786@samp{self seconds} field is @samp{0.00}) didn't appear
787in the histogram samples at all. However, the call graph
788indicated that they were called, so they are listed,
789sorted in decreasing order by the @samp{calls} field.
790Clearly some time was spent executing these functions,
791but the paucity of histogram samples prevents any
792determination of how much time each took.
793
794Here is what the fields in each line mean:
795
796@table @code
797@item % time
798This is the percentage of the total execution time your program spent
799in this function. These should all add up to 100%.
800
801@item cumulative seconds
802This is the cumulative total number of seconds the computer spent
803executing this function, plus the time spent in all the functions
804above this one in this table.
805
806@item self seconds
807This is the number of seconds accounted for by this function alone.
808The flat profile listing is sorted first by this number.
809
810@item calls
811This is the total number of times the function was called. If the
812function was never called, or the number of times it was called cannot
813be determined (probably because the function was not compiled with
814profiling enabled), the @dfn{calls} field is blank.
815
816@item self ms/call
817This represents the average number of milliseconds spent in this
818function per call, if this function is profiled. Otherwise, this field
819is blank for this function.
820
821@item total ms/call
822This represents the average number of milliseconds spent in this
823function and its descendants per call, if this function is profiled.
824Otherwise, this field is blank for this function.
825This is the only field in the flat profile that uses call graph analysis.
826
827@item name
828This is the name of the function. The flat profile is sorted by this
829field alphabetically after the @dfn{self seconds} and @dfn{calls}
830fields are sorted.
831@end table
832
833@node Call Graph,Line-by-line,Flat Profile,Output
834@section The Call Graph
835@cindex call graph
836
837The @dfn{call graph} shows how much time was spent in each function
838and its children. From this information, you can find functions that,
839while they themselves may not have used much time, called other
840functions that did use unusual amounts of time.
841
842Here is a sample call graph from a small program. This call graph came
843from the same @code{gprof} run as the flat profile example in the previous
844section.
845
846@smallexample
847@group
848granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds
849
850index % time self children called name
851 <spontaneous>
852[1] 100.0 0.00 0.05 start [1]
853 0.00 0.05 1/1 main [2]
854 0.00 0.00 1/2 on_exit [28]
855 0.00 0.00 1/1 exit [59]
856-----------------------------------------------
857 0.00 0.05 1/1 start [1]
858[2] 100.0 0.00 0.05 1 main [2]
859 0.00 0.05 1/1 report [3]
860-----------------------------------------------
861 0.00 0.05 1/1 main [2]
862[3] 100.0 0.00 0.05 1 report [3]
863 0.00 0.03 8/8 timelocal [6]
864 0.00 0.01 1/1 print [9]
865 0.00 0.01 9/9 fgets [12]
866 0.00 0.00 12/34 strncmp <cycle 1> [40]
867 0.00 0.00 8/8 lookup [20]
868 0.00 0.00 1/1 fopen [21]
869 0.00 0.00 8/8 chewtime [24]
870 0.00 0.00 8/16 skipspace [44]
871-----------------------------------------------
872[4] 59.8 0.01 0.02 8+472 <cycle 2 as a whole> [4]
873 0.01 0.02 244+260 offtime <cycle 2> [7]
874 0.00 0.00 236+1 tzset <cycle 2> [26]
875-----------------------------------------------
876@end group
877@end smallexample
878
879The lines full of dashes divide this table into @dfn{entries}, one for each
880function. Each entry has one or more lines.
881
882In each entry, the primary line is the one that starts with an index number
883in square brackets. The end of this line says which function the entry is
884for. The preceding lines in the entry describe the callers of this
885function and the following lines describe its subroutines (also called
886@dfn{children} when we speak of the call graph).
887
888The entries are sorted by time spent in the function and its subroutines.
889
890The internal profiling function @code{mcount} (@pxref{Flat Profile})
891is never mentioned in the call graph.
892
893@menu
894* Primary:: Details of the primary line's contents.
895* Callers:: Details of caller-lines' contents.
896* Subroutines:: Details of subroutine-lines' contents.
897* Cycles:: When there are cycles of recursion,
898 such as @code{a} calls @code{b} calls @code{a}@dots{}
899@end menu
900
901@node Primary
902@subsection The Primary Line
903
904The @dfn{primary line} in a call graph entry is the line that
905describes the function which the entry is about and gives the overall
906statistics for this function.
907
908For reference, we repeat the primary line from the entry for function
909@code{report} in our main example, together with the heading line that
910shows the names of the fields:
911
912@smallexample
913@group
914index % time self children called name
915@dots{}
916[3] 100.0 0.00 0.05 1 report [3]
917@end group
918@end smallexample
919
920Here is what the fields in the primary line mean:
921
922@table @code
923@item index
924Entries are numbered with consecutive integers. Each function
925therefore has an index number, which appears at the beginning of its
926primary line.
927
928Each cross-reference to a function, as a caller or subroutine of
929another, gives its index number as well as its name. The index number
930guides you if you wish to look for the entry for that function.
931
932@item % time
933This is the percentage of the total time that was spent in this
934function, including time spent in subroutines called from this
935function.
936
937The time spent in this function is counted again for the callers of
938this function. Therefore, adding up these percentages is meaningless.
939
940@item self
941This is the total amount of time spent in this function. This
942should be identical to the number printed in the @code{seconds} field
943for this function in the flat profile.
944
945@item children
946This is the total amount of time spent in the subroutine calls made by
947this function. This should be equal to the sum of all the @code{self}
948and @code{children} entries of the children listed directly below this
949function.
950
951@item called
952This is the number of times the function was called.
953
954If the function called itself recursively, there are two numbers,
955separated by a @samp{+}. The first number counts non-recursive calls,
956and the second counts recursive calls.
957
958In the example above, the function @code{report} was called once from
959@code{main}.
960
961@item name
962This is the name of the current function. The index number is
963repeated after it.
964
965If the function is part of a cycle of recursion, the cycle number is
966printed between the function's name and the index number
967(@pxref{Cycles}). For example, if function @code{gnurr} is part of
968cycle number one, and has index number twelve, its primary line would
969end like this:
970
971@example
972gnurr <cycle 1> [12]
973@end example
974@end table
975
976@node Callers, Subroutines, Primary, Call Graph
977@subsection Lines for a Function's Callers
978
979A function's entry has a line for each function it was called by.
980These lines' fields correspond to the fields of the primary line, but
981their meanings are different because of the difference in context.
982
983For reference, we repeat two lines from the entry for the function
984@code{report}, the primary line and one caller-line preceding it, together
985with the heading line that shows the names of the fields:
986
987@smallexample
988index % time self children called name
989@dots{}
990 0.00 0.05 1/1 main [2]
991[3] 100.0 0.00 0.05 1 report [3]
992@end smallexample
993
994Here are the meanings of the fields in the caller-line for @code{report}
995called from @code{main}:
996
997@table @code
998@item self
999An estimate of the amount of time spent in @code{report} itself when it was
1000called from @code{main}.
1001
1002@item children
1003An estimate of the amount of time spent in subroutines of @code{report}
1004when @code{report} was called from @code{main}.
1005
1006The sum of the @code{self} and @code{children} fields is an estimate
1007of the amount of time spent within calls to @code{report} from @code{main}.
1008
1009@item called
1010Two numbers: the number of times @code{report} was called from @code{main},
1011followed by the total number of nonrecursive calls to @code{report} from
1012all its callers.
1013
1014@item name and index number
1015The name of the caller of @code{report} to which this line applies,
1016followed by the caller's index number.
1017
1018Not all functions have entries in the call graph; some
1019options to @code{gprof} request the omission of certain functions.
1020When a caller has no entry of its own, it still has caller-lines
1021in the entries of the functions it calls.
1022
1023If the caller is part of a recursion cycle, the cycle number is
1024printed between the name and the index number.
1025@end table
1026
1027If the identity of the callers of a function cannot be determined, a
1028dummy caller-line is printed which has @samp{<spontaneous>} as the
1029``caller's name'' and all other fields blank. This can happen for
1030signal handlers.
1031@c What if some calls have determinable callers' names but not all?
1032@c FIXME - still relevant?
1033
1034@node Subroutines, Cycles, Callers, Call Graph
1035@subsection Lines for a Function's Subroutines
1036
1037A function's entry has a line for each of its subroutines---in other
1038words, a line for each other function that it called. These lines'
1039fields correspond to the fields of the primary line, but their meanings
1040are different because of the difference in context.
1041
1042For reference, we repeat two lines from the entry for the function
1043@code{main}, the primary line and a line for a subroutine, together
1044with the heading line that shows the names of the fields:
1045
1046@smallexample
1047index % time self children called name
1048@dots{}
1049[2] 100.0 0.00 0.05 1 main [2]
1050 0.00 0.05 1/1 report [3]
1051@end smallexample
1052
1053Here are the meanings of the fields in the subroutine-line for @code{main}
1054calling @code{report}:
1055
1056@table @code
1057@item self
1058An estimate of the amount of time spent directly within @code{report}
1059when @code{report} was called from @code{main}.
1060
1061@item children
1062An estimate of the amount of time spent in subroutines of @code{report}
1063when @code{report} was called from @code{main}.
1064
1065The sum of the @code{self} and @code{children} fields is an estimate
1066of the total time spent in calls to @code{report} from @code{main}.
1067
1068@item called
1069Two numbers, the number of calls to @code{report} from @code{main}
1070followed by the total number of nonrecursive calls to @code{report}.
1071This ratio is used to determine how much of @code{report}'s @code{self}
1072and @code{children} time gets credited to @code{main}, as illustrated
1073after this table. @xref{Assumptions}.
1074
1075@item name
1076The name of the subroutine of @code{main} to which this line applies,
1077followed by the subroutine's index number.
1078
1079If the caller is part of a recursion cycle, the cycle number is
1080printed between the name and the index number.
1081@end table
1082
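To illustrate the ratio with invented numbers: if the call graph showed
0.08 seconds of @code{self} plus @code{children} time for @code{report},
and 3 of its 4 nonrecursive calls came from @code{main}, then roughly

@example
0.08 * 3/4 = 0.06
@end example

@noindent
seconds of that time would be credited to @code{main}.
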
1083@node Cycles,, Subroutines, Call Graph
1084@subsection How Mutually Recursive Functions Are Described
1085@cindex cycle
1086@cindex recursion cycle
1087
1088The graph may be complicated by the presence of @dfn{cycles of
1089recursion} in the call graph. A cycle exists if a function calls
1090another function that (directly or indirectly) calls (or appears to
1091call) the original function. For example: if @code{a} calls @code{b},
1092and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.
1093
1094Whenever there are call paths both ways between a pair of functions, they
1095belong to the same cycle. If @code{a} and @code{b} call each other and
1096@code{b} and @code{c} call each other, all three make one cycle. Note that
1097even if @code{b} only calls @code{a} if it was not called from @code{a},
1098@code{gprof} cannot determine this, so @code{a} and @code{b} are still
1099considered a cycle.
1100
1101The cycles are numbered with consecutive integers. When a function
1102belongs to a cycle, each time the function name appears in the call graph
1103it is followed by @samp{<cycle @var{number}>}.
1104
1105The reason cycles matter is that they make the time values in the call
1106graph paradoxical. The ``time spent in children'' of @code{a} should
1107include the time spent in its subroutine @code{b} and in @code{b}'s
1108subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
1109@code{a}'s time should be included in the children of @code{a}, when
1110@code{a} is indirectly recursive?
1111
1112The way @code{gprof} resolves this paradox is by creating a single entry
1113for the cycle as a whole. The primary line of this entry describes the
1114total time spent directly in the functions of the cycle. The
1115``subroutines'' of the cycle are the individual functions of the cycle, and
1116all other functions that were called directly by them. The ``callers'' of
1117the cycle are the functions, outside the cycle, that called functions in
1118the cycle.
1119
1120Here is an example portion of a call graph which shows a cycle containing
1121functions @code{a} and @code{b}. The cycle was entered by a call to
1122@code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.
1123
1124@smallexample
1125index % time self children called name
1126----------------------------------------
1127 1.77 0 1/1 main [2]
1128[3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
1129 1.02 0 3 b <cycle 1> [4]
1130 0.75 0 2 a <cycle 1> [5]
1131----------------------------------------
1132 3 a <cycle 1> [5]
1133[4] 52.85 1.02 0 0 b <cycle 1> [4]
1134 2 a <cycle 1> [5]
1135 0 0 3/6 c [6]
1136----------------------------------------
1137 1.77 0 1/1 main [2]
1138 2 b <cycle 1> [4]
1139[5] 38.86 0.75 0 1 a <cycle 1> [5]
1140 3 b <cycle 1> [4]
1141 0 0 3/6 c [6]
1142----------------------------------------
1143@end smallexample
1144
1145@noindent
1146(The entire call graph for this program contains in addition an entry for
1147@code{main}, which calls @code{a}, and an entry for @code{c}, with callers
1148@code{a} and @code{b}.)
1149
1150@smallexample
1151index % time self children called name
1152 <spontaneous>
1153[1] 100.00 0 1.93 0 start [1]
1154 0.16 1.77 1/1 main [2]
1155----------------------------------------
1156 0.16 1.77 1/1 start [1]
1157[2] 100.00 0.16 1.77 1 main [2]
1158 1.77 0 1/1 a <cycle 1> [5]
1159----------------------------------------
1160 1.77 0 1/1 main [2]
1161[3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
1162 1.02 0 3 b <cycle 1> [4]
1163 0.75 0 2 a <cycle 1> [5]
1164 0 0 6/6 c [6]
1165----------------------------------------
1166 3 a <cycle 1> [5]
1167[4] 52.85 1.02 0 0 b <cycle 1> [4]
1168 2 a <cycle 1> [5]
1169 0 0 3/6 c [6]
1170----------------------------------------
1171 1.77 0 1/1 main [2]
1172 2 b <cycle 1> [4]
1173[5] 38.86 0.75 0 1 a <cycle 1> [5]
1174 3 b <cycle 1> [4]
1175 0 0 3/6 c [6]
1176----------------------------------------
1177 0 0 3/6 b <cycle 1> [4]
1178 0 0 3/6 a <cycle 1> [5]
1179[6] 0.00 0 0 6 c [6]
1180----------------------------------------
1181@end smallexample
1182
1183The @code{self} field of the cycle's primary line is the total time
1184spent in all the functions of the cycle. It equals the sum of the
1185@code{self} fields for the individual functions in the cycle, found
1186in the cycle's entry, in the subroutine lines for these functions.
1187
1188The @code{children} fields of the cycle's primary line and subroutine lines
1189count only subroutines outside the cycle. Even though @code{a} calls
1190@code{b}, the time spent in those calls to @code{b} is not counted in
1191@code{a}'s @code{children} time. Thus, we do not encounter the problem of
1192what to do when the time in those calls to @code{b} includes indirect
1193recursive calls back to @code{a}.
1194
1195The @code{children} field of a caller-line in the cycle's entry estimates
1196the amount of time spent @emph{in the whole cycle}, and its other
1197subroutines, on the times when that caller called a function in the cycle.
1198
1199The @code{calls} field in the primary line for the cycle has two numbers:
1200first, the number of times functions in the cycle were called by functions
1201outside the cycle; second, the number of times they were called by
1202functions in the cycle (including times when a function in the cycle calls
1203itself). This is a generalization of the usual split into nonrecursive and
1204recursive calls.
1205
1206The @code{calls} field of a subroutine-line for a cycle member in the
1207cycle's entry says how many times that function was called from functions in
1208the cycle. The total of all these is the second number in the primary line's
1209@code{calls} field.
1210
1211In the individual entry for a function in a cycle, the other functions in
1212the same cycle can appear as subroutines and as callers. These lines show
1213how many times each function in the cycle called or was called from each other
1214function in the cycle. The @code{self} and @code{children} fields in these
1215lines are blank because of the difficulty of defining meanings for them
1216when recursion is going on.
1217
1218@node Line-by-line,Annotated Source,Call Graph,Output
1219@section Line-by-line Profiling
1220
1221@code{gprof}'s @samp{-l} option causes the program to perform
1222@dfn{line-by-line} profiling. In this mode, histogram
1223samples are assigned not to functions, but to individual
1224lines of source code. The program usually must be compiled
1225with a @samp{-g} option, in addition to @samp{-pg}, in order
1226to generate debugging symbols for tracking source code lines.
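
A typical line-by-line run might look like this (the executable name is
illustrative; @samp{-b} merely suppresses the explanatory blurbs):

@example
gprof -l -b myprog gmon.out > line-profile.txt
@end example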
1227
1228The flat profile is the most useful output table
1229in line-by-line mode.
1230The call graph isn't as useful as normal, since
1231the current version of @code{gprof} does not propagate
1232call graph arcs from source code lines to the enclosing function.
1233The call graph does, however, show each line of code
1234that called each function, along with a count.
1235
1236Here is a section of @code{gprof}'s output, without line-by-line profiling.
1237Note that @code{ct_init} accounted for four histogram hits, and
123813327 calls to @code{init_block}.
1239
1240@smallexample
1241Flat profile:
1242
1243Each sample counts as 0.01 seconds.
1244 % cumulative self self total
1245 time seconds seconds calls us/call us/call name
1246 30.77 0.13 0.04 6335 6.31 6.31 ct_init
1247
1248
1249 Call graph (explanation follows)
1250
1251
1252granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds
1253
1254index % time self children called name
1255
1256 0.00 0.00 1/13496 name_too_long
1257 0.00 0.00 40/13496 deflate
1258 0.00 0.00 128/13496 deflate_fast
1259 0.00 0.00 13327/13496 ct_init
1260[7] 0.0 0.00 0.00 13496 init_block
1261
1262@end smallexample
1263
1264Now let's look at some of @code{gprof}'s output from the same program run,
1265this time with line-by-line profiling enabled. Note that @code{ct_init}'s
1266four histogram hits are broken down into four lines of source code: one hit
1267occurred on each of lines 349, 351, 382 and 385. In the call graph,
1268note how
1269@code{ct_init}'s 13327 calls to @code{init_block} are broken down
1270into one call from line 396, 3071 calls from line 384, 3730 calls
1271from line 385, and 6525 calls from line 387.
1272
1273@smallexample
1274Flat profile:
1275
1276Each sample counts as 0.01 seconds.
1277 % cumulative self
1278 time seconds seconds calls name
1279 7.69 0.10 0.01 ct_init (trees.c:349)
1280 7.69 0.11 0.01 ct_init (trees.c:351)
1281 7.69 0.12 0.01 ct_init (trees.c:382)
1282 7.69 0.13 0.01 ct_init (trees.c:385)
1283
1284
1285 Call graph (explanation follows)
1286
1287
1288granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds
1289
1290 % time self children called name
1291
1292 0.00 0.00 1/13496 name_too_long (gzip.c:1440)
1293 0.00 0.00 1/13496 deflate (deflate.c:763)
1294 0.00 0.00 1/13496 ct_init (trees.c:396)
1295 0.00 0.00 2/13496 deflate (deflate.c:727)
1296 0.00 0.00 4/13496 deflate (deflate.c:686)
1297 0.00 0.00 5/13496 deflate (deflate.c:675)
1298 0.00 0.00 12/13496 deflate (deflate.c:679)
1299 0.00 0.00 16/13496 deflate (deflate.c:730)
1300 0.00 0.00 128/13496 deflate_fast (deflate.c:654)
1301 0.00 0.00 3071/13496 ct_init (trees.c:384)
1302 0.00 0.00 3730/13496 ct_init (trees.c:385)
1303 0.00 0.00 6525/13496 ct_init (trees.c:387)
1304[6] 0.0 0.00 0.00 13496 init_block (trees.c:408)
1305
1306@end smallexample
1307
1308
1309@node Annotated Source,,Line-by-line,Output
1310@section The Annotated Source Listing
1311
1312@code{gprof}'s @samp{-A} option triggers an annotated source listing,
1313which lists the program's source code, each function labeled with the
1314number of times it was called. You may also need to specify the
1315@samp{-I} option, if @code{gprof} can't find the source code files.
1316
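For example, assuming the source files live in a sibling directory (the
path and file names here are hypothetical):

@example
gprof -A -I../src myprog gmon.out > myprog.ann
@end example
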
1317Compiling with @samp{gcc @dots{} -g -pg -a} augments your program
1318with basic-block counting code, in addition to function counting code.
1319This enables @code{gprof} to determine how many times each line
1320of code was executed.
1321For example, consider the following function, taken from gzip,
1322with line numbers added:
1323
1324@smallexample
1325 1 ulg updcrc(s, n)
1326 2 uch *s;
1327 3 unsigned n;
1328 4 @{
1329 5 register ulg c;
1330 6
1331 7 static ulg crc = (ulg)0xffffffffL;
1332 8
1333 9 if (s == NULL) @{
133410 c = 0xffffffffL;
133511 @} else @{
133612 c = crc;
133713 if (n) do @{
133814 c = crc_32_tab[...];
133915 @} while (--n);
134016 @}
134117 crc = c;
134218 return c ^ 0xffffffffL;
134319 @}
1344
1345@end smallexample
1346
1347@code{updcrc} has at least five basic-blocks.
1348One is the function itself. The
1349@code{if} statement on line 9 generates two more basic-blocks, one
1350for each branch of the @code{if}. A fourth basic-block results from
1351the @code{if} on line 13, and the contents of the @code{do} loop form
1352the fifth basic-block. The compiler may also generate additional
1353basic-blocks to handle various special cases.
1354
1355A program augmented for basic-block counting can be analyzed with
1356@code{gprof -l -A}. I also suggest use of the @samp{-x} option,
1357which ensures that each line of code is labeled at least once.
1358Here is @code{updcrc}'s
1359annotated source listing for a sample @code{gzip} run:
1360
@smallexample
                ulg updcrc(s, n)
                    uch *s;
                    unsigned n;
            2 ->@{
                    register ulg c;

                    static ulg crc = (ulg)0xffffffffL;

            2 ->    if (s == NULL) @{
            1 ->        c = 0xffffffffL;
            1 ->    @} else @{
            1 ->        c = crc;
            1 ->        if (n) do @{
        26312 ->            c = crc_32_tab[...];
26312,1,26311 ->        @} while (--n);
                    @}
            2 ->    crc = c;
            2 ->    return c ^ 0xffffffffL;
            2 ->@}
@end smallexample
1382
1383In this example, the function was called twice, passing once through
1384each branch of the @code{if} statement. The body of the @code{do}
1385loop was executed a total of 26312 times. Note how the @code{while}
1386statement is annotated. It began execution 26312 times, once for
1387each iteration through the loop. One of those times (the last time)
1388it exited, while it branched back to the beginning of the loop 26311 times.
1389
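To produce such a listing, compile and run the program as usual, then
run @code{gprof} with both options.  A typical sequence might look like
this (the file and program names are only illustrative):

@example
gcc -g -pg -a -o gzip gzip.c trees.c @dots{}
./gzip big-test-file
gprof -l -A -x gzip gmon.out > listing
@end example
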
1390@node Inaccuracy
1391@chapter Inaccuracy of @code{gprof} Output
1392
1393@menu
1394* Sampling Error:: Statistical margins of error
1395* Assumptions:: Estimating children times
1396@end menu
1397
1398@node Sampling Error,Assumptions,,Inaccuracy
1399@section Statistical Sampling Error
1400
1401The run-time figures that @code{gprof} gives you are based on a sampling
1402process, so they are subject to statistical inaccuracy. If a function runs
1403only a small amount of time, so that on the average the sampling process
1404ought to catch that function in the act only once, there is a pretty good
1405chance it will actually find that function zero times, or twice.
1406
1407By contrast, the number-of-calls and basic-block figures
1408are derived by counting, not
1409sampling. They are completely accurate and will not vary from run to run
1410if your program is deterministic.
1411
1412The @dfn{sampling period} that is printed at the beginning of the flat
1413profile says how often samples are taken. The rule of thumb is that a
1414run-time figure is accurate if it is considerably bigger than the sampling
1415period.
1416
1417The actual amount of error can be predicted.
1418For @var{n} samples, the @emph{expected} error
1419is the square-root of @var{n}. For example,
1420if the sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second,
1421@var{n} is 100 samples (1 second/0.01 seconds), sqrt(@var{n}) is 10 samples, so
1422the expected error in @code{foo}'s run-time is 0.1 seconds (10*0.01 seconds),
1423or ten percent of the observed value.
1424Again, if the sampling period is 0.01 seconds and @code{bar}'s run-time is
1425100 seconds, @var{n} is 10000 samples, sqrt(@var{n}) is 100 samples, so
1426the expected error in @code{bar}'s run-time is 1 second,
1427or one percent of the observed value.
1428It is likely to
1429vary this much @emph{on the average} from one profiling run to the next.
1430(@emph{Sometimes} it will vary more.)
1431
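Stated as a general rule of thumb, the expected error follows directly
from the number of samples:

@example
n               = run time / sampling period
expected error  = sqrt(n) * sampling period
                = sqrt(run time * sampling period)
relative error  = 1 / sqrt(n)
@end example
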
1432This does not mean that a small run-time figure is devoid of information.
1433If the program's @emph{total} run-time is large, a small run-time for one
1434function does tell you that that function used an insignificant fraction of
1435the whole program's time. Usually this means it is not worth optimizing.
1436
1437One way to get more accuracy is to give your program more (but similar)
1438input data so it will take longer. Another way is to combine the data from
1439several runs, using the @samp{-s} option of @code{gprof}. Here is how:
1440
1441@enumerate
1442@item
1443Run your program once.
1444
1445@item
1446Issue the command @samp{mv gmon.out gmon.sum}.
1447
1448@item
1449Run your program again, the same as before.
1450
1451@item
1452Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command:
1453
1454@example
1455gprof -s @var{executable-file} gmon.out gmon.sum
1456@end example
1457
1458@item
1459Repeat the last two steps as often as you wish.
1460
1461@item
1462Analyze the cumulative data using this command:
1463
1464@example
1465gprof @var{executable-file} gmon.sum > @var{output-file}
1466@end example
1467@end enumerate
1468
1469@node Assumptions,,Sampling Error,Inaccuracy
1470@section Estimating @code{children} Times
1471
1472Some of the figures in the call graph are estimates---for example, the
1473@code{children} time values and all the time figures in caller and
1474subroutine lines.
1475
1476There is no direct information about these measurements in the profile
1477data itself. Instead, @code{gprof} estimates them by making an assumption
1478about your program that might or might not be true.
1479
1480The assumption made is that the average time spent in each call to any
1481function @code{foo} is not correlated with who called @code{foo}. If
1482@code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came
1483from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s
1484@code{children} time, by assumption.
1485
1486This assumption is usually true enough, but for some programs it is far
1487from true. Suppose that @code{foo} returns very quickly when its argument
1488is zero; suppose that @code{a} always passes zero as an argument, while
1489other callers of @code{foo} pass other arguments. In this program, all the
1490time spent in @code{foo} is in the calls from callers other than @code{a}.
1491But @code{gprof} has no way of knowing this; it will blindly and
1492incorrectly charge 2 seconds of time in @code{foo} to the children of
1493@code{a}.
1494
1495@c FIXME - has this been fixed?
1496We hope some day to put more complete data into @file{gmon.out}, so that
1497this assumption is no longer needed, if we can figure out how. For the
1498nonce, the estimated figures are usually more useful than misleading.
1499
1500@node How do I?
1501@chapter Answers to Common Questions
1502
1503@table @asis
1504@item How do I find which lines in my program were executed the most times?
1505
1506Compile your program with basic-block counting enabled, run it, then
1507use the following pipeline:
1508
1509@example
1510gprof -l -C @var{objfile} | sort -k 3 -n -r
1511@end example
1512
1513This listing will show you the lines in your code executed most often,
1514but not necessarily those that consumed the most time.
1515
1516@item How do I find which lines in my program called a particular function?
1517
1518Use @code{gprof -l} and look up the function in the call graph.
1519The callers will be broken down by function and line number.
1520
1521@item How do I analyze a program that runs for less than a second?
1522
1523Try using a shell script like this one:
1524
1525@example
1526for i in `seq 1 100`; do
1527 fastprog
1528 mv gmon.out gmon.out.$i
1529done
1530
1531gprof -s fastprog gmon.out.*
1532
1533gprof fastprog gmon.sum
1534@end example
1535
1536If your program is completely deterministic, all the call counts
1537will be simple multiples of 100 (i.e. a function called once in
1538each run will appear with a call count of 100).
1539
1540@end table
1541
1542@node Incompatibilities
1543@chapter Incompatibilities with Unix @code{gprof}
1544
1545@sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data
1546file @file{gmon.out}, and provide essentially the same information. But
1547there are a few differences.
1548
1549@itemize @bullet
1550@item
1551@sc{gnu} @code{gprof} uses a new, generalized file format with support
1552for basic-block execution counts and non-realtime histograms. A magic
1553cookie and version number allow @code{gprof} to easily identify
1554new-style files. Old BSD-style files can still be read.
1555@xref{File Format}.
1556
1557@item
1558For a recursive function, Unix @code{gprof} lists the function as a
1559parent and as a child, with a @code{calls} field that lists the number
1560of recursive calls. @sc{gnu} @code{gprof} omits these lines and puts
1561the number of recursive calls in the primary line.
1562
1563@item
1564When a function is suppressed from the call graph with @samp{-e}, @sc{gnu}
1565@code{gprof} still lists it as a subroutine of functions that call it.
1566
1567@item
1568@sc{gnu} @code{gprof} accepts the @samp{-k} option with its argument
1569in the form @samp{from/to}, instead of @samp{from to}.
1570
1571@item
1572In the annotated source listing,
1573if there are multiple basic blocks on the same line,
1574@sc{gnu} @code{gprof} prints all of their counts, separated by commas.
1575
1576@ignore - it does this now
1577@item
1578The function names printed in @sc{gnu} @code{gprof} output do not include
1579the leading underscores that are added internally to the front of all
1580C identifiers on many operating systems.
1581@end ignore
1582
1583@item
1584The blurbs, field widths, and output formats are different. @sc{gnu}
1585@code{gprof} prints blurbs after the tables, so that you can see the
1586tables without skipping the blurbs.
1587@end itemize
1588
1589@node Details
1590@chapter Details of Profiling
1591
1592@menu
1593* Implementation:: How a program collects profiling information
1594* File Format:: Format of @samp{gmon.out} files
1595* Internals:: @code{gprof}'s internal operation
1596* Debugging:: Using @code{gprof}'s @samp{-d} option
1597@end menu
1598
1599@node Implementation,File Format,,Details
1600@section Implementation of Profiling
1601
1602Profiling works by changing how every function in your program is compiled
1603so that when it is called, it will stash away some information about where
1604it was called from. From this, the profiler can figure out what function
1605called it, and can count how many times it was called. This change is made
1606by the compiler when your program is compiled with the @samp{-pg} option,
1607which causes every function to call @code{mcount}
1608(or @code{_mcount}, or @code{__mcount}, depending on the OS and compiler)
1609as one of its first operations.
1610
1611The @code{mcount} routine, included in the profiling library,
1612is responsible for recording in an in-memory call graph table
1613both its calling routine (the child) and its caller's caller (the parent). This is
1614typically done by examining the stack frame to find both
1615the address of the child, and the return address in the original parent.
1616Since this is a very machine-dependent operation, @code{mcount}
1617itself is typically a short assembly-language stub routine
1618that extracts the required
1619information, and then calls @code{__mcount_internal}
1620(a normal C function) with two arguments - @code{frompc} and @code{selfpc}.
1621@code{__mcount_internal} is responsible for maintaining
1622the in-memory call graph, which records @code{frompc}, @code{selfpc},
1623and the number of times each of these call arcs was traversed.
1624
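In outline, @code{__mcount_internal} performs something like the
following.  This is only a conceptual sketch (the real routine uses a
hash-indexed arc table and is tied to the internals of the profiling
library), but it shows the kind of bookkeeping involved:

@smallexample
/* Conceptual sketch only, not the actual library code.  Each traversal
   of a call arc (frompc -> selfpc) increments a counter in an
   in-memory table.  */
struct arc @{ unsigned long frompc, selfpc, count; @};

static struct arc arc_table[4096];   /* real code uses a hash table */
static unsigned int n_arcs;

void
sketch_mcount_internal (unsigned long frompc, unsigned long selfpc)
@{
  unsigned int i;

  for (i = 0; i < n_arcs; i++)
    if (arc_table[i].frompc == frompc && arc_table[i].selfpc == selfpc)
      @{
        arc_table[i].count++;
        return;
      @}

  if (n_arcs < sizeof arc_table / sizeof arc_table[0])
    @{
      arc_table[n_arcs].frompc = frompc;
      arc_table[n_arcs].selfpc = selfpc;
      arc_table[n_arcs].count = 1;
      n_arcs++;
    @}
@}
@end smallexample
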
1625GCC Version 2 provides a magical function (@code{__builtin_return_address}),
1626which allows a generic @code{mcount} function to extract the
1627required information from the stack frame. However, on some
1628architectures, most notably the SPARC, using this builtin can be
1629very computationally expensive, and an assembly language version
1630of @code{mcount} is used for performance reasons.
1631
1632Number-of-calls information for library routines is collected by using a
1633special version of the C library. The programs in it are the same as in
1634the usual C library, but they were compiled with @samp{-pg}. If you
1635link your program with @samp{gcc @dots{} -pg}, it automatically uses the
1636profiling version of the library.
1637
1638Profiling also involves watching your program as it runs, and keeping a
1639histogram of where the program counter happens to be every now and then.
1640Typically the program counter is looked at around 100 times per second of
1641run time, but the exact frequency may vary from system to system.
1642
1643This is done in one of two ways. Most UNIX-like operating systems
1644provide a @code{profil()} system call, which registers a memory
1645array with the kernel, along with a scale
1646factor that determines how the program's address space maps
1647into the array.
1648Typical scaling values cause every 2 to 8 bytes of address space
1649to map into a single array slot.
1650On every tick of the system clock
1651(assuming the profiled program is running), the value of the
1652program counter is examined and the corresponding slot in
1653the memory array is incremented. Since this is done in the kernel,
1654which had to interrupt the process anyway to handle the clock
1655interrupt, very little additional system overhead is required.
1656
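As a rough sketch, registering a histogram buffer with @code{profil()}
looks something like this (the exact prototype and the interpretation
of the scale factor vary somewhat from system to system):

@smallexample
#include <unistd.h>    /* profil() is declared here on many systems */

static unsigned short hist[65536];

void
start_pc_sampling (void)
@{
  /* Map the program's text addresses onto hist[].  Roughly speaking,
     a scale of 0x10000 maps addresses to slots one-to-one; smaller
     values make each slot cover a larger range of addresses.  */
  profil (hist, sizeof hist, 0 /* text offset */, 0x8000 /* scale */);
@}
@end smallexample
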
1657However, some operating systems, most notably Linux 2.0 (and earlier),
1658do not provide a @code{profil()} system call. On such a system,
1659arrangements are made for the kernel to periodically deliver
1660a signal to the process (typically via @code{setitimer()}),
1661which then performs the same operation of examining the
1662program counter and incrementing a slot in the memory array.
1663Since this method requires a signal to be delivered to
1664user space every time a sample is taken, it uses considerably
1665more overhead than kernel-based profiling. Also, due to the
1666added delay required to deliver the signal, this method is
1667less accurate.
1668
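A sketch of the signal-based approach follows.  Extracting the
interrupted program counter from the signal context is inherently
machine-dependent, so that step is represented by a hypothetical
helper, @code{get_interrupted_pc()}:

@smallexample
#include <signal.h>
#include <string.h>
#include <sys/time.h>

extern unsigned short hist[];         /* histogram array */
extern unsigned long text_low;        /* lowest profiled text address */
extern unsigned long bytes_per_slot;  /* derived from the scale factor */

extern unsigned long get_interrupted_pc (void *ctx);  /* hypothetical */

static void
sigprof_handler (int sig, siginfo_t *info, void *ctx)
@{
  unsigned long pc = get_interrupted_pc (ctx);

  if (pc >= text_low)
    hist[(pc - text_low) / bytes_per_slot]++;
@}

void
start_signal_sampling (void)
@{
  struct sigaction sa;
  struct itimerval it;

  memset (&sa, 0, sizeof sa);
  sa.sa_sigaction = sigprof_handler;
  sa.sa_flags = SA_SIGINFO | SA_RESTART;
  sigaction (SIGPROF, &sa, NULL);

  it.it_interval.tv_sec = 0;
  it.it_interval.tv_usec = 10000;     /* roughly 100 samples per second */
  it.it_value = it.it_interval;
  setitimer (ITIMER_PROF, &it, NULL);
@}
@end smallexample
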
1669A special startup routine allocates memory for the histogram and
1670either calls @code{profil()} or sets up
1671a clock signal handler.
1672This routine (@code{monstartup}) can be invoked in several ways.
1673On Linux systems, a special profiling startup file @code{gcrt0.o},
1674which invokes @code{monstartup} before @code{main},
1675is used instead of the default @code{crt0.o}.
1676Use of this special startup file is one of the effects
1677of using @samp{gcc @dots{} -pg} to link.
1678On SPARC systems, no special startup files are used.
1679Rather, the @code{mcount} routine, when it is invoked for
1680the first time (typically when @code{main} is called),
1681calls @code{monstartup}.
1682
1683If the compiler's @samp{-a} option was used, basic-block counting
1684is also enabled. Each object file is then compiled with a static array
1685of counts, initially zero.
1686In the executable code, every time a new basic-block begins
1687(for example, at each branch of an @code{if} statement), an extra instruction
1688is inserted to increment the corresponding count in the array.
1689At compile time, a paired array was constructed that recorded
1690the starting address of each basic-block. Taken together,
1691the two arrays record the starting address of every basic-block,
1692along with the number of times it was executed.
1693
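Conceptually, the effect of @samp{-a} on the @code{if}/@code{else} in
the @code{updcrc} example shown earlier is as if the compiler had
rewritten each branch to bump a counter (the array name
@code{__bb_counts} below is purely illustrative):

@smallexample
if (s == NULL)
  @{
    __bb_counts[1]++;   /* counter for the "then" basic-block */
    c = 0xffffffffL;
  @}
else
  @{
    __bb_counts[2]++;   /* counter for the "else" basic-block */
    c = crc;
  @}
@end smallexample
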
1694The profiling library also includes a function (@code{mcleanup}) which is
1695typically registered using @code{atexit()} to be called as the
1696program exits, and is responsible for writing the file @file{gmon.out}.
1697Profiling is turned off, various headers are output, and the histogram
1698is written, followed by the call-graph arcs and the basic-block counts.
1699
1700The output from @code{gprof} gives no indication of parts of your program that
1701are limited by I/O or swapping bandwidth. This is because samples of the
1702program counter are taken at fixed intervals of the program's run time.
1703Therefore, the
1704time measurements in @code{gprof} output say nothing about time that your
1705program was not running. For example, a part of the program that creates
1706so much data that it cannot all fit in physical memory at once may run very
1707slowly due to thrashing, but @code{gprof} will say it uses little time. On
1708the other hand, sampling by run time has the advantage that the amount of
1709load due to other users won't directly affect the output you get.
1710
1711@node File Format,Internals,Implementation,Details
1712@section Profiling Data File Format
1713
1714The old BSD-derived file format used for profile data does not contain a
1715magic cookie that allows one to check whether a data file really is a
1716gprof file. Furthermore, it does not provide a version number, thus
1717rendering changes to the file format almost impossible. @sc{gnu} @code{gprof}
1718uses a new file format that provides these features. For backward
1719compatibility, @sc{gnu} @code{gprof} continues to support the old BSD-derived
1720format, but not all features are supported with it. For example,
1721basic-block execution counts cannot be accommodated by the old file
1722format.
1723
1724The new file format is defined in header file @file{gmon_out.h}. It
1725consists of a header containing the magic cookie and a version number,
1726as well as some spare bytes available for future extensions. All data
1727in a profile data file is in the native format of the host on which
1728the profile was collected. @sc{gnu} @code{gprof} adapts automatically to the
1729byte-order in use.
1730
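The header declarations in @file{gmon_out.h} look roughly like this;
byte arrays are used so that the file layout does not depend on
structure padding:

@smallexample
struct gmon_hdr
@{
  char cookie[4];      /* the magic cookie */
  char version[4];     /* version number, in native byte order */
  char spare[3 * 4];   /* reserved for future extensions */
@};
@end smallexample
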
1731In the new file format, the header is followed by a sequence of
1732records. Currently, there are three different record types: histogram
1733records, call-graph arc records, and basic-block execution count
1734records. Each file can contain any number of each record type. When
1735reading a file, @sc{gnu} @code{gprof} will ensure records of the same type are
1736compatible with each other and compute the union of all records. For
1737example, for basic-block execution counts, the union is simply the sum
1738of all execution counts for each basic-block.
1739
1740@subsection Histogram Records
1741
1742Histogram records consist of a header that is followed by an array of
1743bins. The header contains the text-segment range that the histogram
1744spans, the size of the histogram in bytes (unlike in the old BSD
1745format, this does not include the size of the header), the rate of the
1746profiling clock, and the physical dimension that the bin counts
1747represent after being scaled by the profiling clock rate. The
1748physical dimension is specified in two parts: a long name of up to 15
1749characters and a single character abbreviation. For example, a
1750histogram representing real-time would specify the long name as
1751"seconds" and the abbreviation as "s". This feature is useful for
1752architectures that support performance monitor hardware (which,
1753fortunately, is becoming increasingly common). For example, under DEC
1754OSF/1, the "uprofile" command can be used to produce a histogram of,
1755say, instruction cache misses. In this case, the dimension in the
1756histogram header could be set to "i-cache misses" and the abbreviation
1757could be set to "1" (because it is simply a count, not a physical
1758dimension). Also, the profiling rate would have to be set to 1 in
1759this case.
1760
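In @file{gmon_out.h}, the histogram record header is declared along
these lines:

@smallexample
struct gmon_hist_hdr
@{
  char low_pc[sizeof (char *)];    /* lowest text address covered */
  char high_pc[sizeof (char *)];   /* highest text address covered */
  char hist_size[4];               /* size of the histogram */
  char prof_rate[4];               /* profiling clock rate */
  char dimen[15];                  /* physical dimension, e.g. "seconds" */
  char dimen_abbrev;               /* abbreviation, e.g. 's' */
@};
@end smallexample
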
1761Histogram bins are 16-bit numbers and each bin represents an equal
1762amount of text-space. For example, if the text-segment is one
1763thousand bytes long and if there are ten bins in the histogram, each
1764bin represents one hundred bytes.
1765
1766
1767@subsection Call-Graph Records
1768
1769Call-graph records have a format that is identical to the one used in
1770the BSD-derived file format. Each record consists of an arc in the call graph
1771and a count indicating the number of times the arc was traversed
1772during program execution. Arcs are specified by a pair of addresses:
1773the first must be within the caller's function and the second must be
1774within the callee's function. When performing profiling at the
1775function level, these addresses can point anywhere within the
1776respective function. However, when profiling at the line-level, it is
1777better if the addresses are as close to the call-site/entry-point as
1778possible. This will ensure that the line-level call-graph is able to
1779identify exactly which line of source code performed calls to a
1780function.
1781
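The corresponding declaration in @file{gmon_out.h} is roughly:

@smallexample
struct gmon_cg_arc_record
@{
  char from_pc[sizeof (char *)];   /* address within the caller */
  char self_pc[sizeof (char *)];   /* address within the callee */
  char count[4];                   /* number of traversals of the arc */
@};
@end smallexample
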
1782@subsection Basic-Block Execution Count Records
1783
1784Basic-block execution count records consist of a header followed by a
1785sequence of address/count pairs. The header simply specifies the
1786length of the sequence. In an address/count pair, the address
1787identifies a basic-block and the count specifies the number of times
1788that basic-block was executed. Any address within the basic-block can
1789be used.
1790
1791@node Internals,Debugging,File Format,Details
1792@section @code{gprof}'s Internal Operation
1793
1794Like most programs, @code{gprof} begins by processing its options.
1795During this stage, it may build its symspec list
1796(@code{sym_ids.c:sym_id_add}), if
1797options are specified which use symspecs.
1798@code{gprof} maintains a single linked list of symspecs,
1799which will eventually get turned into 12 symbol tables,
1800organized into six include/exclude pairs - one
1801pair each for the flat profile (INCL_FLAT/EXCL_FLAT),
1802the call graph arcs (INCL_ARCS/EXCL_ARCS),
1803printing in the call graph (INCL_GRAPH/EXCL_GRAPH),
1804timing propagation in the call graph (INCL_TIME/EXCL_TIME),
1805the annotated source listing (INCL_ANNO/EXCL_ANNO),
1806and the execution count listing (INCL_EXEC/EXCL_EXEC).
1807
1808After option processing, @code{gprof} finishes
1809building the symspec list by adding all the symspecs in
1810@code{default_excluded_list} to the exclude lists
1811EXCL_TIME and EXCL_GRAPH, and if line-by-line profiling is specified,
1812EXCL_FLAT as well.
1813These default excludes are not added to EXCL_ANNO, EXCL_ARCS, and EXCL_EXEC.
1814
1815Next, the BFD library is called to open the object file,
1816verify that it is an object file,
1817and read its symbol table (@code{core.c:core_init}),
1818using @code{bfd_canonicalize_symtab} after mallocing
1819an appropriately sized array of asymbols. At this point,
1820function mappings are read (if the @samp{--file-ordering} option
1821has been specified), and the core text space is read into
1822memory (if the @samp{-c} option was given).
1823
1824@code{gprof}'s own symbol table, an array of Sym structures,
1825is now built.
1826This is done in one of two ways, by one of two routines, depending
1827on whether line-by-line profiling (@samp{-l} option) has been
1828enabled.
1829For normal profiling, the BFD canonical symbol table is scanned.
1830For line-by-line profiling, every
1831text space address is examined, and a new symbol table entry
1832gets created every time the line number changes.
1833In either case, two passes are made through the symbol
1834table - one to count the size of the symbol table required,
1835and the other to actually read the symbols. In between the
1836two passes, a single array of type @code{Sym} is created of
1837the appropriate length.
1838Finally, @code{symtab.c:symtab_finalize}
1839is called to sort the symbol table and remove duplicate entries
1840(entries with the same memory address).
1841
1842The symbol table must be a contiguous array for two reasons.
1843First, the @code{qsort} library function (which sorts an array)
1844will be used to sort the symbol table.
1845Also, the symbol lookup routine (@code{symtab.c:sym_lookup}),
1846which finds symbols
1847based on memory address, uses a binary search algorithm
1848which requires the symbol table to be a sorted array.
1849Function symbols are indicated with an @code{is_func} flag.
1850Line number symbols have no special flags set.
1851Additionally, a symbol can have an @code{is_static} flag
1852to indicate that it is a local symbol.
1853
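A simplified sketch of such an address-to-symbol lookup (not the
actual @code{sym_lookup} code) might look like this:

@smallexample
typedef struct
@{
  unsigned long addr;
  const char *name;
@} Sym;

/* Return the entry whose address range contains pc, assuming symtab
   is sorted by ascending address; return NULL if pc precedes the
   first symbol.  */
const Sym *
lookup_sym (const Sym *symtab, int count, unsigned long pc)
@{
  int lo = 0, hi = count - 1;

  while (lo <= hi)
    @{
      int mid = (lo + hi) / 2;

      if (pc < symtab[mid].addr)
        hi = mid - 1;
      else if (mid + 1 < count && pc >= symtab[mid + 1].addr)
        lo = mid + 1;
      else
        return &symtab[mid];
    @}
  return NULL;
@}
@end smallexample
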
1854With the symbol table read, the symspecs can now be translated
1855into Syms (@code{sym_ids.c:sym_id_parse}). Remember that a single
1856symspec can match multiple symbols.
1857An array of symbol tables
1858(@code{syms}) is created, each entry of which is a symbol table
1859of Syms to be included or excluded from a particular listing.
1860The master symbol table and the symspecs are examined by nested
1861loops, and every symbol that matches a symspec is inserted
1862into the appropriate syms table. This is done twice, once to
1863count the size of each required symbol table, and again to build
1864the tables, which have been malloced between passes.
1865From now on, to determine whether a symbol is on an include
1866or exclude symspec list, @code{gprof} simply uses its
1867standard symbol lookup routine on the appropriate table
1868in the @code{syms} array.
1869
1870Now the profile data file(s) themselves are read
1871(@code{gmon_io.c:gmon_out_read}),
1872first by checking for a new-style @samp{gmon.out} header,
1873then assuming this is an old-style BSD @samp{gmon.out}
1874if the magic number test failed.
1875
1876New-style histogram records are read by @code{hist.c:hist_read_rec}.
1877For the first histogram record, allocate a memory array to hold
1878all the bins, and read them in.
1879When multiple profile data files (or files with multiple histogram
1880records) are read, the starting address, ending address, number
1881of bins and sampling rate must match between the various histograms,
1882or a fatal error will result.
1883If everything matches, just sum the additional histograms into
1884the existing in-memory array.
1885
1886As each call graph record is read (@code{call_graph.c:cg_read_rec}),
1887the parent and child addresses
1888are matched to symbol table entries, and a call graph arc is
1889created by @code{cg_arcs.c:arc_add}, unless the arc fails a symspec
1890check against INCL_ARCS/EXCL_ARCS. As each arc is added,
1891a linked list is maintained of the parent's child arcs, and of the child's
1892parent arcs.
1893Both the child's call count and the arc's call count are
1894incremented by the record's call count.
1895
1896Basic-block records are read (@code{basic_blocks.c:bb_read_rec}),
1897but only if line-by-line profiling has been selected.
1898Each basic-block address is matched to a corresponding line
1899symbol in the symbol table, and an entry made in the symbol's
1900bb_addr and bb_calls arrays. Again, if multiple basic-block
1901records are present for the same address, the call counts
1902are cumulative.
1903
1904A gmon.sum file is dumped, if requested (@code{gmon_io.c:gmon_out_write}).
1905
1906If histograms were present in the data files, assign them to symbols
1907(@code{hist.c:hist_assign_samples}) by iterating over all the sample
1908bins and assigning them to symbols. Since the symbol table
1909is sorted in order of ascending memory addresses, we can
1910simply follow along in the symbol table as we make our pass
1911over the sample bins.
1912This step includes a symspec check against INCL_FLAT/EXCL_FLAT.
1913Depending on the histogram
1914scale factor, a sample bin may span multiple symbols,
1915in which case a fraction of the sample count is allocated
1916to each symbol, proportional to the degree of overlap.
1917This effect is rare for normal profiling, but overlaps
1918are more common during line-by-line profiling, and can
1919cause each of two adjacent lines to be credited with half
1920a hit, for example.
1921
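The proportional split amounts to a small calculation of this sort
(illustrative only, not the actual @code{hist_assign_samples} code):

@smallexample
/* Share of one bin's time credited to a symbol whose address range
   [sym_lo, sym_hi) overlaps the bin [bin_lo, bin_hi).  */
double
bin_share (double bin_lo, double bin_hi,
           double sym_lo, double sym_hi, double bin_time)
@{
  double lo = bin_lo > sym_lo ? bin_lo : sym_lo;
  double hi = bin_hi < sym_hi ? bin_hi : sym_hi;

  if (hi <= lo)
    return 0.0;                   /* no overlap */
  return bin_time * (hi - lo) / (bin_hi - bin_lo);
@}
@end smallexample
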
1922If call graph data is present, @code{cg_arcs.c:cg_assemble} is called.
1923First, if @samp{-c} was specified, a machine-dependent
1924routine (@code{find_call}) scans through each symbol's machine code,
1925looking for subroutine call instructions, and adding them
1926to the call graph with a zero call count.
1927A topological sort is performed by depth-first numbering
1928all the symbols (@code{cg_dfn.c:cg_dfn}), so that
1929children are always numbered less than their parents,
1930then making an array of pointers into the symbol table and sorting it into
1931numerical order, which is reverse topological
1932order (children appear before parents).
1933Cycles are also detected at this point, all members
1934of which are assigned the same topological number.
1935Two passes are now made through this sorted array of symbol pointers.
1936The first pass, from end to beginning (parents to children),
1937computes the fraction of child time to propagate to each parent
1938and a print flag.
1939The print flag reflects symspec handling of INCL_GRAPH/EXCL_GRAPH,
1940with a parent's include or exclude (print or no print) property
1941being propagated to its children, unless they themselves explicitly appear
1942in INCL_GRAPH or EXCL_GRAPH.
1943A second pass, from beginning to end (children to parents), actually
1944propagates the timings along the call graph, subject
1945to a check against INCL_TIME/EXCL_TIME.
1946With the print flag, fractions, and timings now stored in the symbol
1947structures, the topological sort array is now discarded, and a
1948new array of pointers is assembled, this time sorted by propagated time.
1949
1950Finally, print the various outputs the user requested, which is now fairly
1951straightforward. The call graph (@code{cg_print.c:cg_print}) and
1952flat profile (@code{hist.c:hist_print}) are regurgitations of values
1953already computed. The annotated source listing
1954(@code{basic_blocks.c:print_annotated_source}) uses basic-block
1955information, if present, to label each line of code with call counts,
1956otherwise only the function call counts are presented.
1957
1958The function ordering code is marginally well documented
1959in the source code itself (@code{cg_print.c}). Basically,
1960the functions with the most use and the most parents are
1961placed first, followed by other functions with the most use,
1962followed by lower use functions, followed by unused functions
1963at the end.
1964
1965@node Debugging,,Internals,Details
1966@section Debugging @code{gprof}
1967
1968If @code{gprof} was compiled with debugging enabled,
1969the @samp{-d} option triggers debugging output
1970(to stdout) which can be helpful in understanding its operation.
1971The debugging number specified is interpreted as a sum of the following
1972options:
1973
1974@table @asis
1975@item 2 - Topological sort
1976Monitor depth-first numbering of symbols during call graph analysis
1977@item 4 - Cycles
1978Shows symbols as they are identified as cycle heads
1979@item 16 - Tallying
1980As the call graph arcs are read, show each arc and how
1981the total calls to each function are tallied
1982@item 32 - Call graph arc sorting
1983Details sorting individual parents/children within each call graph entry
1984@item 64 - Reading histogram and call graph records
1985Shows address ranges of histograms as they are read, and each
1986call graph arc
1987@item 128 - Symbol table
1988Reading, classifying, and sorting the symbol table from the object file.
1989For line-by-line profiling (@samp{-l} option), also shows line numbers
1990being assigned to memory addresses.
1991@item 256 - Static call graph
1992Trace operation of @samp{-c} option
1993@item 512 - Symbol table and arc table lookups
1994Detail operation of lookup routines
1995@item 1024 - Call graph propagation
1996Shows how function times are propagated along the call graph
1997@item 2048 - Basic-blocks
1998Shows basic-block records as they are read from profile data
1999(only meaningful with @samp{-l} option)
2000@item 4096 - Symspecs
2001Shows symspec-to-symbol pattern matching operation
2002@item 8192 - Annotate source
2003Tracks operation of @samp{-A} option
2004@end table
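
For example, to watch both the symbol table processing (128) and the
lookup routines (512), add the two values and pass the sum as a single
number (the program and file names are only illustrative):

@example
gprof -d640 myprog gmon.out
@end example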
2005
2006@contents
2007@bye
2008
2009NEEDS AN INDEX
2010
2011-T - "traditional BSD style": How is it different? Should the
2012differences be documented?
2013
2014example flat file adds up to 100.01%...
2015
2016note: time estimates now only go out to one decimal place (0.0), where
2017they used to extend two (78.67).