2 = Introduction =
3
4 The purpose of the '''Tracing Monitoring Framework (TMF)''' is to facilitate the integration of tracing and monitoring tools into Eclipse, to provide out-of-the-box generic functionalities/views and provide extension mechanisms of the base functionalities for application specific purposes.
5
6 = Implementing a New Trace Type =
7
8 The framework can easily be extended to support more trace types. To make a new trace type, one must define the following items:
9
10 * The event type
11 * The trace reader
12 * The trace context
13 * The trace location
14 * (Optional but recommended) The ''org.eclipse.linuxtools.tmf.ui.tracetype'' plug-in extension point
15
16 The '''event type''' must implement an ''ITmfEvent'' or extend a class that implements an ''ITmfEvent''. Typically it will extend ''TmfEvent''. The event type must contain all the data of an event. The '''trace reader''' must be of an ''ITmfTrace'' type. The ''TmfTrace'' class will supply many background operations so that the reader only needs to implement certain functions. The '''trace context''' can be seen as the internals of an iterator. It is required by the trace reader to parse events as it iterates the trace and to keep track of its rank and location. It can have a timestamp, a rank, a file position, or any other element, it should be considered to be ephemeral. The '''trace location''' is an element that is cloned often to store checkpoints, it is generally persistent. It is used to rebuild a context, therefore, it needs to contain enough information to unambiguously point to one and only one event. Finally the ''tracetype'' plug-in extension associates a given trace, non-programmatically to a trace type for use in the UI.
17
18 == An Example: Nexus-lite parser ==
19
20 === Description of the file ===
21
This is a very small subset of the Nexus trace format, with some changes to make it easier to read. There is one file. This file starts with 64 strings containing the event names, followed by an arbitrarily large number of events. Each event is 64 bits long: the first 32 bits are the timestamp in microseconds, and the remaining 32 bits are split into 6 bits for the event type and 26 bits for the data payload.
23
The trace type will be made of two parts. Part 1 is the event description: 64 strings, comma-separated and terminated by a line feed.
25
<pre>
Startup,Stop,Load,Add, ... ,reserved\n
</pre>
29
Then there will be the events, in this format:
31
{| width= "85%"
|style="width: 50%; background-color: #ffffcc;"|timestamp (32 bits)
|style="width: 10%; background-color: #ffccff;"|type (6 bits)
|style="width: 40%; background-color: #ccffcc;"|payload (26 bits)
|-
|style="background-color: #ffcccc;" colspan="3"|64 bits total
|}
39
All events will be the same size (64 bits).
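
To make the bit layout concrete, here is a minimal, hypothetical sketch of how a single 64-bit event could be decoded with plain bit masking (the class and helper names are illustrative only; the real reader shown later on this page does the equivalent work on a memory-mapped buffer):

<pre>
import java.nio.ByteBuffer;

final class NexusEventDecoder {

    /** Simple holder for one decoded event (illustrative only). */
    static final class DecodedEvent {
        final long timestampMicros; // first 32 bits, read as unsigned
        final int type;             // next 6 bits (0..63), index into the event names
        final int payload;          // last 26 bits of data

        DecodedEvent(long timestampMicros, int type, int payload) {
            this.timestampMicros = timestampMicros;
            this.type = type;
            this.payload = payload;
        }
    }

    /** Decodes one 64-bit event from the buffer's current position. */
    static DecodedEvent decode(ByteBuffer buffer) {
        // Mask with 0xffffffffL to treat the 32-bit values as unsigned,
        // since Java only has signed integer types.
        long ts = 0xffffffffL & buffer.getInt();
        long data = 0xffffffffL & buffer.getInt();
        int type = (int) (data >> 26) & 0x3f;     // high 6 bits of the second word
        int payload = (int) (data & 0x03FFFFFFL); // low 26 bits of the second word
        return new DecodedEvent(ts, type, payload);
    }
}
</pre>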
41
42 === NexusLite Plug-in ===
43
Create a new plug-in project: select '''File''' > '''New''' > '''Project...''' > '''Plug-in Project''', set the title to '''com.example.nexuslite''', click '''Next >''', then click on '''Finish'''.
45
46 Now the structure for the Nexus trace Plug-in is set up.
47
Add a dependency to the TMF core and UI plug-ins by opening '''MANIFEST.MF''' in '''META-INF''', selecting the '''Dependencies''' tab, clicking '''Add ...''', and adding '''org.eclipse.linuxtools.tmf.core''' and '''org.eclipse.linuxtools.tmf.ui'''.
49
50 [[Image:images/NTTAddDepend.png]]<br>
51 [[Image:images/NTTSelectProjects.png]]<br>
52
53 Now the project can access TMF classes.
54
55 === Trace Event ===
56
57 The '''TmfEvent''' class will work for this example. No code required.
58
59 === Trace Reader ===
60
61 The trace reader will extend a '''TmfTrace''' class.
62
63 It will need to implement:
64
65 * validate (is the trace format valid?)
66
* initTrace (called as the trace is opened)
68
69 * seekEvent (go to a position in the trace and create a context)
70
71 * getNext (implemented in the base class)
72
73 * parseEvent (read the next element in the trace)
74
Here is an example implementation of the Nexus trace file reader:
76
<pre>/*******************************************************************************
 * Copyright (c) 2013 Ericsson
 *
 * All rights reserved. This program and the accompanying materials are
 * made available under the terms of the Eclipse Public License v1.0 which
 * accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *   Matthew Khouzam - Initial API and implementation
 *******************************************************************************/

package com.example.nexuslite;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.linuxtools.tmf.core.event.ITmfEvent;
import org.eclipse.linuxtools.tmf.core.event.ITmfEventField;
import org.eclipse.linuxtools.tmf.core.event.TmfEvent;
import org.eclipse.linuxtools.tmf.core.event.TmfEventField;
import org.eclipse.linuxtools.tmf.core.event.TmfEventType;
import org.eclipse.linuxtools.tmf.core.exceptions.TmfTraceException;
import org.eclipse.linuxtools.tmf.core.timestamp.ITmfTimestamp;
import org.eclipse.linuxtools.tmf.core.timestamp.TmfTimestamp;
import org.eclipse.linuxtools.tmf.core.trace.ITmfContext;
import org.eclipse.linuxtools.tmf.core.trace.ITmfEventParser;
import org.eclipse.linuxtools.tmf.core.trace.ITmfLocation;
import org.eclipse.linuxtools.tmf.core.trace.TmfContext;
import org.eclipse.linuxtools.tmf.core.trace.TmfLongLocation;
import org.eclipse.linuxtools.tmf.core.trace.TmfTrace;

/**
 * Nexus trace type
 *
 * @author Matthew Khouzam
 */
public class NexusTrace extends TmfTrace implements ITmfEventParser {

    private static final int CHUNK_SIZE = 65536; // seems fast on MY system
    private static final int EVENT_SIZE = 8; // according to spec

    private TmfLongLocation fCurrentLocation;
    private static final TmfLongLocation NULLLOCATION = new TmfLongLocation(
            (Long) null);
    private static final TmfContext NULLCONTEXT = new TmfContext(NULLLOCATION,
            -1L);

    private long fSize;
    private long fOffset;
    private File fFile;
    private String[] fEventTypes;
    private FileChannel fFileChannel;
    private MappedByteBuffer fMappedByteBuffer;

    @Override
    public IStatus validate(@SuppressWarnings("unused") IProject project,
            String path) {
        File f = new File(path);
        if (!f.exists()) {
            return new Status(IStatus.ERROR, Activator.PLUGIN_ID,
                    "File does not exist"); //$NON-NLS-1$
        }
        if (!f.isFile()) {
            return new Status(IStatus.ERROR, Activator.PLUGIN_ID, path
                    + " is not a file"); //$NON-NLS-1$
        }
        String header = readHeader(f);
        if (header.split(",", 64).length == 64) { //$NON-NLS-1$
            return Status.OK_STATUS;
        }
        return new Status(IStatus.ERROR, Activator.PLUGIN_ID,
                "File does not start as a CSV"); //$NON-NLS-1$
    }

    @Override
    public ITmfLocation getCurrentLocation() {
        return fCurrentLocation;
    }

    @Override
    public void initTrace(IResource resource, String path,
            Class<? extends ITmfEvent> type) throws TmfTraceException {
        super.initTrace(resource, path, type);
        fFile = new File(path);
        fSize = fFile.length();
        if (fSize == 0) {
            throw new TmfTraceException("file is empty"); //$NON-NLS-1$
        }
        String header = readHeader(fFile);
        if (header == null) {
            throw new TmfTraceException("File does not start as a CSV"); //$NON-NLS-1$
        }
        fEventTypes = header.split(",", 64); // 64 values of types according to //$NON-NLS-1$
                                             // the 'spec'
        if (fEventTypes.length != 64) {
            throw new TmfTraceException(
                    "Trace header does not contain 64 event names"); //$NON-NLS-1$
        }
        if (getNbEvents() < 1) {
            throw new TmfTraceException("Trace does not have any events"); //$NON-NLS-1$
        }
        try {
            fFileChannel = new FileInputStream(fFile).getChannel();
            seek(0);
        } catch (FileNotFoundException e) {
            throw new TmfTraceException(e.getMessage());
        } catch (IOException e) {
            throw new TmfTraceException(e.getMessage());
        }
    }

    /**
     * @return
     */
    private String readHeader(File file) {
        String header = new String();
        BufferedReader br;
        try {
            br = new BufferedReader(new FileReader(file));
            header = br.readLine();
            br.close();
        } catch (IOException e) {
            return null;
        }
        fOffset = header.length() + 1;
        setNbEvents((fSize - fOffset) / EVENT_SIZE);
        return header;
    }

    @Override
    public double getLocationRatio(ITmfLocation location) {
        return ((TmfLongLocation) location).getLocationInfo().doubleValue()
                / getNbEvents();
    }

    @Override
    public ITmfContext seekEvent(ITmfLocation location) {
        TmfLongLocation nl = (TmfLongLocation) location;
        if (location == null) {
            nl = new TmfLongLocation(0L);
        }
        try {
            seek(nl.getLocationInfo());
        } catch (IOException e) {
            return NULLCONTEXT;
        }
        return new TmfContext(nl, nl.getLocationInfo());
    }

    @Override
    public ITmfContext seekEvent(double ratio) {
        long rank = (long) (ratio * getNbEvents());
        try {
            seek(rank);
        } catch (IOException e) {
            return NULLCONTEXT;
        }
        return new TmfContext(new TmfLongLocation(rank), rank);
    }

    private void seek(long rank) throws IOException {
        final long position = fOffset + (rank * EVENT_SIZE);
        int size = Math.min((int) (fFileChannel.size() - position), CHUNK_SIZE);
        fMappedByteBuffer = fFileChannel.map(MapMode.READ_ONLY, position, size);
    }

    @Override
    public ITmfEvent parseEvent(ITmfContext context) {
        if ((context == null) || (context.getRank() == -1)) {
            return null;
        }
        TmfEvent event = null;
        long ts = -1;
        int type = -1;
        int payload = -1;
        long pos = context.getRank();
        if (pos < getNbEvents()) {
            try {
                // if we are approaching the limit size, move to a new window
                if ((fMappedByteBuffer.position() + EVENT_SIZE) > fMappedByteBuffer
                        .limit()) {
                    seek(context.getRank());
                }
                /*
                 * the trace format, is:
                 *
                 * - 32 bits for the time,
                 * - 6 for the event type,
                 * - 26 for the data.
                 *
                 * all the 0x00 stuff are masks.
                 */

                /*
                 * it may be interesting to assume if the ts goes back in time,
                 * it actually is rolling over we would need to keep the
                 * previous timestamp for that, keep the high bits and increment
                 * them if the next int ts read is lesser than the previous one
                 */

                ts = 0x00000000ffffffffL & fMappedByteBuffer.getInt();

                long data = 0x00000000ffffffffL & fMappedByteBuffer.getInt();
                type = (int) (data >> 26) & (0x03f); // first 6 bits
                payload = (int) (data & 0x003FFFFFFL); // last 26 bits
                // the time is in microseconds.
                TmfTimestamp timestamp = new TmfTimestamp(ts, ITmfTimestamp.MICROSECOND_SCALE);
                final String title = fEventTypes[type];
                // put the value in a field
                final TmfEventField tmfEventField = new TmfEventField(
                        "value", payload, null); //$NON-NLS-1$
                // the field must be in an array
                final TmfEventField[] fields = new TmfEventField[1];
                fields[0] = tmfEventField;
                final TmfEventField content = new TmfEventField(
                        ITmfEventField.ROOT_FIELD_ID, null, fields);
                // set the current location

                fCurrentLocation = new TmfLongLocation(pos);
                // create the event
                event = new TmfEvent(this, pos, timestamp, null,
                        new TmfEventType(title, title, null), content, null);
            } catch (IOException e) {
                fCurrentLocation = new TmfLongLocation(-1L);
            }
        }
        return event;
    }
}
</pre>
319
In this example the '''validate''' function checks that the file exists, that it is not a directory, and that it starts with a header of 64 comma-separated event names.
321
322 The '''initTrace''' function will read the event names, and find where the data starts. After this, the number of events is known, and since each event is 8 bytes long according to the specs, the seek is then trivial.
323
324 The '''seek''' here will just reset the reader to the right location.
325
326 The '''parseEvent''' method needs to parse and return the current event and store the current location.
327
The '''getNext''' method (in the base class) will read the next event and update the context. It calls the '''parseEvent''' method to read the event and update the location. It does not need to be overridden, and in this example it is not. The necessary sequence of actions is: parse the next event from the trace, create an '''ITmfEvent''' with that data, update the current location, call '''updateAttributes''', update the context, then return the event.
329
330 === Trace Context ===
331
The trace context will be a '''TmfContext'''.
333
334 === Trace Location ===
335
The trace location will be a long, representing the rank in the file. The '''TmfLongLocation''' class will be used; once again, no code is required.
337
338 === (Optional but recommended) The ''org.eclipse.linuxtools.tmf.ui.tracetype'' plug-in extension point ===
339
340 One can implement the ''tracetype'' extension in their own plug-in. In this example, the ''com.example.nexuslite'' plug-in will be modified.
341
The '''plugin.xml''' file in the UI plug-in needs to be updated if one wants users to access the given trace type. It can be updated in the Eclipse plug-in editor.
343
# In the Extensions tab, add the '''org.eclipse.linuxtools.tmf.ui.tracetype''' extension point.
345 [[Image:images/NTTExtension.png]]<br>
346 [[Image:images/NTTTraceType.png]]<br>
347 [[Image:images/NTTExtensionPoint.png]]<br>
348
# Add a new type in the '''org.eclipse.linuxtools.tmf.ui.tracetype''' extension. To do that, '''right click''' on the extension, then in the context menu go to '''New >''', '''type'''.
350
351 [[Image:images/NTTAddType.png]]<br>
352
353 The '''id''' is the unique identifier used to refer to the trace.
354
355 The '''name''' is the field that shall be displayed when a trace type is selected.
356
The '''trace type''' is the canonical path referring to the class of the trace.

The '''event type''' is the canonical path referring to the class of the events of a given trace.
360
361 The '''category''' (optional) is the container in which this trace type will be stored.
362
363 The '''icon''' (optional) is the image to associate with that trace type.
364
365 In the end, the extension menu should look like this.
366
367 [[Image:images/NTTPluginxmlComplete.png]]<br>
368
369 == Best Practices ==
370
* Do not load the whole trace into RAM; it will limit the size of the traces that can be read.
* Reuse as much code as possible; it makes the trace format much easier to maintain.
* Use Eclipse's plug-in editor instead of editing the XML directly.
* Do not forget that Java supports only signed data types; special care may be needed to handle unsigned data.
* Keep all the code in the same plug-in as the ''tracetype'' if it makes sense from a design point of view. It will make integration easier.
376
377 == Download the Code ==
378
379 The plug-in is available [http://wiki.eclipse.org/images/3/34/Com.example.nexuslite.zip here] with a trace generator and a quick test case.
380
381 == Optional Trace Type Attributes ==
382 After defining the trace type as described in the previous chapters it is possible to define optional attributes for the trace type.
383
384 === Default Editor ===
The attribute '''defaultEditor''' allows for configuring the editor to use for displaying the events. If omitted, the ''TmfEventsEditor'' is used as default. To configure an editor, first add the '''defaultEditor''' attribute to the trace type in the extension definition. This can be done by selecting the trace type in the plug-in manifest editor. Then click the right mouse button and select '''New -> defaultEditor''' in the context sensitive menu. Then select the newly added attribute. Now you can specify the editor id to use on the right side of the manifest editor. For example, this attribute could be used to implement an extension of the class ''org.eclipse.ui.part.MultiPageEditor''. The first page could use the ''TmfEventsEditor'' to display the events in a table as usual and other pages can display other aspects of the trace.
386
387 === Events Table Type ===
388 The attribute '''eventsTableType''' allows for configuring the events table class to use in the default events editor. If omitted, the default events table will be used. To configure a trace type specific events table, first add the '''eventsTableType''' attribute to the trace type in the extension definition. This can be done by selecting the trace type in the plug-in manifest editor. Then click the right mouse button and select '''New -> eventsTableType''' in the context sensitive menu. Then select the newly added attribute and click on ''class'' on the right side of the manifest editor. The new class wizard will open. The ''superclass'' field will be already filled with the class ''org.eclipse.linuxtools.tmf.ui.viewers.events.TmfEventsTable''. Using this attribute a table with different columns than the default columns can be defined. See class org.eclipse.linuxtools.internal.lttng2.kernel.ui.viewers.events.Lttng2EventsTable for an example implementation.
389
390 === Statistics Viewer Type ===
391 The attribute '''statisticsViewerType''' allows for defining trace type specific statistics. If omitted, only the default statistics will be displayed in the ''Statistics'' view (part of the ''Tracing'' view category). By default this view displays the total number of events and the number of events per event type for the whole trace and for the selected time range. To configure trace type specific statistics, first add the '''statisticsViewerType''' attribute to the trace type in the extension definition. This can be done by selecting the trace type in the plug-in manifest editor. Then click the right mouse button and select '''New -> statisticsViewerType''' in the context sensitive menu. Then select the newly added attribute and click on ''class'' on the right side of the manifest editor. The new class wizard will open. The ''superclass'' field will be already filled with the class ''org.eclipse.linuxtools.tmf.ui.viewers.statistics.TmfStatisticsViewer''. Now overwrite the relevant methods to provide the trace specific statistics. When executing the plug-in extension in Eclipse and opening the ''Statistics'' view the ''Statistics'' view will show an additional tab beside the global tab that shows the default statistics. The new tab will display the trace specific statistics provided in the ''TmfStatisticsViewer'' sub-class implementation.
392
393 = View Tutorial =
394
395 This tutorial describes how to create a simple view using the TMF framework and the SWTChart library. SWTChart is a library based on SWT that can draw several types of charts including a line chart which we will use in this tutorial. We will create a view containing a line chart that displays time stamps on the X axis and the corresponding event values on the Y axis.
396
397 This tutorial will cover concepts like:
398
399 * Extending TmfView
400 * Signal handling (@TmfSignalHandler)
401 * Data requests (TmfEventRequest)
402 * SWTChart integration
403
404 === Prerequisites ===
405
406 The tutorial is based on Eclipse 4.4 (Eclipse Luna), TMF 3.0.0 and SWTChart 0.7.0. If you are using TMF from the source repository, SWTChart is already included in the target definition file (see org.eclipse.linuxtools.lttng.target). You can also install it manually by using the Orbit update site. http://download.eclipse.org/tools/orbit/downloads/
407
408 === Creating an Eclipse UI Plug-in ===
409
410 To create a new project with name org.eclipse.linuxtools.tmf.sample.ui select '''File -> New -> Project -> Plug-in Development -> Plug-in Project'''. <br>
411 [[Image:images/Screenshot-NewPlug-inProject1.png]]<br>
412
413 [[Image:images/Screenshot-NewPlug-inProject2.png]]<br>
414
415 [[Image:images/Screenshot-NewPlug-inProject3.png]]<br>
416
417 === Creating a View ===
418
419 To open the plug-in manifest, double-click on the MANIFEST.MF file. <br>
420 [[Image:images/SelectManifest.png]]<br>
421
Change to the Dependencies tab and select '''Add...''' in the ''Required Plug-ins'' section. A new dialog box will open. Next find the plug-in ''org.eclipse.linuxtools.tmf.core'' and press '''OK'''.<br>
423 Following the same steps, add ''org.eclipse.linuxtools.tmf.ui'' and ''org.swtchart''.<br>
424 [[Image:images/AddDependencyTmfUi.png]]<br>
425
Change to the Extensions tab and select '''Add...''' in the ''All Extensions'' section. A new dialog box will open. Find the view extension ''org.eclipse.ui.views'' and press '''Finish'''.<br>
427 [[Image:images/AddViewExtension1.png]]<br>
428
To create a view, right-click on the new extension, then select '''New -> view'''.<br>
430 [[Image:images/AddViewExtension2.png]]<br>
431
A new view entry has been created. Fill in the fields ''id'' and ''name''. For ''class'' click on the '''class hyperlink''' and it will show the New Java Class dialog. Enter the name ''SampleView'', change the superclass to ''TmfView'' and click Finish. This will create the source file and fill the ''class'' field in the process. We use TmfView as the superclass because it provides extra functionality like getting the active trace and pinning, and it has built-in support for signal handling between components.<br>
433 [[Image:images/FillSampleViewExtension.png]]<br>
434
435 This will generate an empty class. Once the quick fixes are applied, the following code is obtained:
436
<pre>
package org.eclipse.linuxtools.tmf.sample.ui;

import org.eclipse.linuxtools.tmf.ui.views.TmfView;
import org.eclipse.swt.widgets.Composite;

public class SampleView extends TmfView {

    public SampleView(String viewName) {
        super(viewName);
        // TODO Auto-generated constructor stub
    }

    @Override
    public void createPartControl(Composite parent) {
        // TODO Auto-generated method stub

    }

    @Override
    public void setFocus() {
        // TODO Auto-generated method stub

    }

}
</pre>
464
This creates an empty view; however, the basic structure is now in place.
466
467 === Implementing a view ===
468
We will start by adding an empty chart, which will then need to be populated with the trace data. Finally, we will make the chart more visually pleasing by adjusting the range and formatting the time stamps.
470
471 ==== Adding an Empty Chart ====
472
473 First, we can add an empty chart to the view and initialize some of its components.
474
<pre>
private static final String SERIES_NAME = "Series";
private static final String Y_AXIS_TITLE = "Signal";
private static final String X_AXIS_TITLE = "Time";
private static final String FIELD = "value"; // The name of the field that we want to display on the Y axis
private static final String VIEW_ID = "org.eclipse.linuxtools.tmf.sample.ui.view";
private Chart chart;
private ITmfTrace currentTrace;

public SampleView() {
    super(VIEW_ID);
}

@Override
public void createPartControl(Composite parent) {
    chart = new Chart(parent, SWT.BORDER);
    chart.getTitle().setVisible(false);
    chart.getAxisSet().getXAxis(0).getTitle().setText(X_AXIS_TITLE);
    chart.getAxisSet().getYAxis(0).getTitle().setText(Y_AXIS_TITLE);
    chart.getSeriesSet().createSeries(SeriesType.LINE, SERIES_NAME);
    chart.getLegend().setVisible(false);
}

@Override
public void setFocus() {
    chart.setFocus();
}
</pre>
503
The view is prepared. Run the example: to launch an Eclipse Application, select the ''Overview'' tab and click on '''Launch an Eclipse Application'''.<br>
505 [[Image:images/RunEclipseApplication.png]]<br>
506
A new Eclipse application window will open. In the new window go to '''Window -> Show View -> Other... -> Other -> Sample View'''.<br>
508 [[Image:images/ShowViewOther.png]]<br>
509
510 You should now see a view containing an empty chart<br>
511 [[Image:images/EmptySampleView.png]]<br>
512
513 ==== Signal Handling ====
514
We would like to populate the view when a trace is selected. To achieve this, we can use a signal handler, which is specified with the '''@TmfSignalHandler''' annotation.
516
<pre>
@TmfSignalHandler
public void traceSelected(final TmfTraceSelectedSignal signal) {

}
</pre>
523
524 ==== Requesting Data ====
525
Then we need to actually gather data from the trace. This is done asynchronously using a ''TmfEventRequest''.
527
<pre>
@TmfSignalHandler
public void traceSelected(final TmfTraceSelectedSignal signal) {
    // Don't populate the view again if we're already showing this trace
    if (currentTrace == signal.getTrace()) {
        return;
    }
    currentTrace = signal.getTrace();

    // Create the request to get data from the trace

    TmfEventRequest req = new TmfEventRequest(TmfEvent.class,
            TmfTimeRange.ETERNITY, 0, ITmfEventRequest.ALL_DATA,
            ITmfEventRequest.ExecutionType.BACKGROUND) {

        @Override
        public void handleData(ITmfEvent data) {
            // Called for each event
            super.handleData(data);
        }

        @Override
        public void handleSuccess() {
            // Request successful, no more data available
            super.handleSuccess();
        }

        @Override
        public void handleFailure() {
            // Request failed, no more data available
            super.handleFailure();
        }
    };
    ITmfTrace trace = signal.getTrace();
    trace.sendRequest(req);
}
</pre>
565
566 ==== Transferring Data to the Chart ====
567
The chart expects an array of doubles for both the X and Y axis values. To provide that, we can accumulate each event's time and value in their respective lists, then convert the lists to arrays when all events are processed.
569
<pre>
TmfEventRequest req = new TmfEventRequest(TmfEvent.class,
        TmfTimeRange.ETERNITY, 0, ITmfEventRequest.ALL_DATA,
        ITmfEventRequest.ExecutionType.BACKGROUND) {

    ArrayList<Double> xValues = new ArrayList<Double>();
    ArrayList<Double> yValues = new ArrayList<Double>();

    @Override
    public void handleData(ITmfEvent data) {
        // Called for each event
        super.handleData(data);
        ITmfEventField field = data.getContent().getField(FIELD);
        if (field != null) {
            yValues.add((Double) field.getValue());
            xValues.add((double) data.getTimestamp().getValue());
        }
    }

    @Override
    public void handleSuccess() {
        // Request successful, no more data available
        super.handleSuccess();

        final double x[] = toArray(xValues);
        final double y[] = toArray(yValues);

        // This part needs to run on the UI thread since it updates the chart SWT control
        Display.getDefault().asyncExec(new Runnable() {

            @Override
            public void run() {
                chart.getSeriesSet().getSeries()[0].setXSeries(x);
                chart.getSeriesSet().getSeries()[0].setYSeries(y);

                chart.redraw();
            }

        });
    }

    /**
     * Convert List<Double> to double[]
     */
    private double[] toArray(List<Double> list) {
        double[] d = new double[list.size()];
        for (int i = 0; i < list.size(); ++i) {
            d[i] = list.get(i);
        }

        return d;
    }
};
</pre>
624
625 ==== Adjusting the Range ====
626
627 The chart now contains values but they might be out of range and not visible. We can adjust the range of each axis by computing the minimum and maximum values as we add events.
628
<pre>
ArrayList<Double> xValues = new ArrayList<Double>();
ArrayList<Double> yValues = new ArrayList<Double>();
private double maxY = -Double.MAX_VALUE;
private double minY = Double.MAX_VALUE;
private double maxX = -Double.MAX_VALUE;
private double minX = Double.MAX_VALUE;

@Override
public void handleData(ITmfEvent data) {
    super.handleData(data);
    ITmfEventField field = data.getContent().getField(FIELD);
    if (field != null) {
        Double yValue = (Double) field.getValue();
        minY = Math.min(minY, yValue);
        maxY = Math.max(maxY, yValue);
        yValues.add(yValue);

        double xValue = (double) data.getTimestamp().getValue();
        xValues.add(xValue);
        minX = Math.min(minX, xValue);
        maxX = Math.max(maxX, xValue);
    }
}

@Override
public void handleSuccess() {
    super.handleSuccess();
    final double x[] = toArray(xValues);
    final double y[] = toArray(yValues);

    // This part needs to run on the UI thread since it updates the chart SWT control
    Display.getDefault().asyncExec(new Runnable() {

        @Override
        public void run() {
            chart.getSeriesSet().getSeries()[0].setXSeries(x);
            chart.getSeriesSet().getSeries()[0].setYSeries(y);

            // Set the new range
            if (!xValues.isEmpty() && !yValues.isEmpty()) {
                chart.getAxisSet().getXAxis(0).setRange(new Range(0, x[x.length - 1]));
                chart.getAxisSet().getYAxis(0).setRange(new Range(minY, maxY));
            } else {
                chart.getAxisSet().getXAxis(0).setRange(new Range(0, 1));
                chart.getAxisSet().getYAxis(0).setRange(new Range(0, 1));
            }
            chart.getAxisSet().adjustRange();

            chart.redraw();
        }
    });
}
</pre>
684
685 ==== Formatting the Time Stamps ====
686
687 To display the time stamps on the X axis nicely, we need to specify a format or else the time stamps will be displayed as ''long''. We use TmfTimestampFormat to make it consistent with the other TMF views. We also need to handle the '''TmfTimestampFormatUpdateSignal''' to make sure that the time stamps update when the preferences change.
688
<pre>
@Override
public void createPartControl(Composite parent) {
    ...

    chart.getAxisSet().getXAxis(0).getTick().setFormat(new TmfChartTimeStampFormat());
}

public class TmfChartTimeStampFormat extends SimpleDateFormat {
    private static final long serialVersionUID = 1L;

    @Override
    public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) {
        long time = date.getTime();
        toAppendTo.append(TmfTimestampFormat.getDefaulTimeFormat().format(time));
        return toAppendTo;
    }
}

@TmfSignalHandler
public void timestampFormatUpdated(TmfTimestampFormatUpdateSignal signal) {
    // Called when the time stamp preference is changed
    chart.getAxisSet().getXAxis(0).getTick().setFormat(new TmfChartTimeStampFormat());
    chart.redraw();
}
</pre>
714
715 We also need to populate the view when a trace is already selected and the view is opened. We can reuse the same code by having the view send the '''TmfTraceSelectedSignal''' to itself.
716
<pre>
@Override
public void createPartControl(Composite parent) {
    ...

    ITmfTrace trace = getActiveTrace();
    if (trace != null) {
        traceSelected(new TmfTraceSelectedSignal(this, trace));
    }
}
</pre>
728
729 The view is now ready but we need a proper trace to test it. For this example, a trace was generated using LTTng-UST so that it would produce a sine function.<br>
730
731 [[Image:images/SampleView.png]]<br>
732
In summary, we have implemented a simple TMF view using the SWTChart library. We made use of signals and requests to populate the view at the appropriate time and we formatted the time stamps nicely. We also made sure that the time stamp format is updated when the preferences change.
734
735 = Component Interaction =
736
737 TMF provides a mechanism for different components to interact with each other using signals. The signals can carry information that is specific to each signal.
738
739 The TMF Signal Manager handles registration of components and the broadcasting of signals to their intended receivers.
740
741 Components can register as VIP receivers which will ensure they will receive the signal before non-VIP receivers.
742
743 == Sending Signals ==
744
745 In order to send a signal, an instance of the signal must be created and passed as argument to the signal manager to be dispatched. Every component that can handle the signal will receive it. The receivers do not need to be known by the sender.
746
<pre>
TmfExampleSignal signal = new TmfExampleSignal(this, ...);
TmfSignalManager.dispatchSignal(signal);
</pre>
751
752 If the sender is an instance of the class TmfComponent, the broadcast method can be used:
753
<pre>
TmfExampleSignal signal = new TmfExampleSignal(this, ...);
broadcast(signal);
</pre>
758
759 == Receiving Signals ==
760
761 In order to receive any signal, the receiver must first be registered with the signal manager. The receiver can register as a normal or VIP receiver.
762
<pre>
TmfSignalManager.register(this);
TmfSignalManager.registerVIP(this);
</pre>
767
768 If the receiver is an instance of the class TmfComponent, it is automatically registered as a normal receiver in the constructor.
769
770 When the receiver is destroyed or disposed, it should deregister itself from the signal manager.
771
<pre>
TmfSignalManager.deregister(this);
</pre>
775
776 To actually receive and handle any specific signal, the receiver must use the @TmfSignalHandler annotation and implement a method that will be called when the signal is broadcast. The name of the method is irrelevant.
777
<pre>
@TmfSignalHandler
public void example(TmfExampleSignal signal) {
    ...
}
</pre>
784
If necessary, a component can use the source of the signal to filter out and ignore signals that it broadcast itself. This is useful when the component is also a receiver of the signal but only needs to handle it when it was sent by another component or by another instance of the component.
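
For example, a receiver can simply return when it detects that it is itself the source of the signal. A minimal sketch, using the same placeholder TmfExampleSignal as above and assuming the signal exposes its source through getSource(), as the built-in TMF signals do:

<pre>
@TmfSignalHandler
public void example(TmfExampleSignal signal) {
    // Ignore signals that this component broadcast itself
    if (signal.getSource() == this) {
        return;
    }
    // ... handle signals sent by other components ...
}
</pre>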
786
787 == Signal Throttling ==
788
It is possible for a TmfComponent instance to buffer the dispatching of signals, so that only the last queued signal is sent to the receivers, after a specified delay has elapsed without any other signal being queued. All signals that are preempted by a newer signal within the delay are discarded.
790
791 The signal throttler must first be initialized:
792
<pre>
final int delay = 100; // in ms
TmfSignalThrottler throttler = new TmfSignalThrottler(this, delay);
</pre>
797
798 Then the sending of signals should be queued through the throttler:
799
<pre>
TmfExampleSignal signal = new TmfExampleSignal(this, ...);
throttler.queue(signal);
</pre>
804
805 When the throttler is no longer needed, it should be disposed:
806
<pre>
throttler.dispose();
</pre>
810
811 == Signal Reference ==
812
813 The following is a list of built-in signals defined in the framework.
814
815 === TmfStartSynchSignal ===
816
817 ''Purpose''
818
819 This signal is used to indicate the start of broadcasting of a signal. Internally, the data provider will not fire event requests until the corresponding TmfEndSynchSignal signal is received. This allows coalescing of requests triggered by multiple receivers of the broadcast signal.
820
821 ''Senders''
822
823 Sent by TmfSignalManager before dispatching a signal to all receivers.
824
825 ''Receivers''
826
827 Received by TmfDataProvider.
828
829 === TmfEndSynchSignal ===
830
831 ''Purpose''
832
This signal is used to indicate the end of broadcasting of a signal. Internally, the data provider fires all pending event requests that were received and buffered since the corresponding TmfStartSynchSignal signal was received. This allows coalescing of requests triggered by multiple receivers of the broadcast signal.
834
835 ''Senders''
836
837 Sent by TmfSignalManager after dispatching a signal to all receivers.
838
839 ''Receivers''
840
841 Received by TmfDataProvider.
842
843 === TmfTraceOpenedSignal ===
844
845 ''Purpose''
846
847 This signal is used to indicate that a trace has been opened in an editor.
848
849 ''Senders''
850
851 Sent by a TmfEventsEditor instance when it is created.
852
853 ''Receivers''
854
855 Received by TmfTrace, TmfExperiment, TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.
856
857 === TmfTraceSelectedSignal ===
858
859 ''Purpose''
860
861 This signal is used to indicate that a trace has become the currently selected trace.
862
863 ''Senders''
864
865 Sent by a TmfEventsEditor instance when it receives focus. Components can send this signal to make a trace editor be brought to front.
866
867 ''Receivers''
868
869 Received by TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.
870
871 === TmfTraceClosedSignal ===
872
873 ''Purpose''
874
875 This signal is used to indicate that a trace editor has been closed.
876
877 ''Senders''
878
879 Sent by a TmfEventsEditor instance when it is disposed.
880
881 ''Receivers''
882
883 Received by TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.
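
For example, a view will typically clear its content when the trace it is currently displaying is closed. A minimal sketch, reusing the currentTrace field from the view tutorial above:

<pre>
@TmfSignalHandler
public void traceClosed(final TmfTraceClosedSignal signal) {
    if (signal.getTrace() != currentTrace) {
        return;
    }
    // The displayed trace was closed: forget it and clear the view
    currentTrace = null;
}
</pre>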
884
885 === TmfTraceRangeUpdatedSignal ===
886
887 ''Purpose''
888
889 This signal is used to indicate that the valid time range of a trace has been updated. This triggers indexing of the trace up to the end of the range. In the context of streaming, this end time is considered a safe time up to which all events are guaranteed to have been completely received. For non-streaming traces, the end time is set to infinity indicating that all events can be read immediately. Any processing of trace events that wants to take advantage of request coalescing should be triggered by this signal.
890
891 ''Senders''
892
893 Sent by TmfExperiment and non-streaming TmfTrace. Streaming traces should send this signal in the TmfTrace subclass when a new safe time is determined by a specific implementation.
894
895 ''Receivers''
896
897 Received by TmfTrace, TmfExperiment and components that process trace events. Components that need to process trace events should handle this signal.
898
899 === TmfTraceUpdatedSignal ===
900
901 ''Purpose''
902
903 This signal is used to indicate that new events have been indexed for a trace.
904
905 ''Senders''
906
907 Sent by TmfCheckpointIndexer when new events have been indexed and the number of events has changed.
908
909 ''Receivers''
910
911 Received by components that need to be notified of a new trace event count.
912
913 === TmfTimeSynchSignal ===
914
915 ''Purpose''
916
917 This signal is used to indicate that a new time or time range has been
918 selected. It contains a begin and end time. If a single time is selected then
919 the begin and end time are the same.
920
921 ''Senders''
922
923 Sent by any component that allows the user to select a time or time range.
924
925 ''Receivers''
926
927 Received by any component that needs to be notified of the currently selected time or time range.
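
A typical receiver reads the selected time or time range from the signal and refreshes itself. A minimal sketch, assuming the begin and end times are exposed through accessors named getBeginTime() and getEndTime() (check the signal class for the exact API of your TMF version):

<pre>
@TmfSignalHandler
public void timeSelected(final TmfTimeSynchSignal signal) {
    // Begin and end are equal when a single time is selected
    final long begin = signal.getBeginTime().getValue();
    final long end = signal.getEndTime().getValue();
    // ... refresh the view for the new selection ...
}
</pre>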
928
929 === TmfRangeSynchSignal ===
930
931 ''Purpose''
932
933 This signal is used to indicate that a new time range window has been set.
934
935 ''Senders''
936
937 Sent by any component that allows the user to set a time range window.
938
939 ''Receivers''
940
941 Received by any component that needs to be notified of the current visible time range window.
942
943 === TmfEventFilterAppliedSignal ===
944
945 ''Purpose''
946
947 This signal is used to indicate that a filter has been applied to a trace.
948
949 ''Senders''
950
951 Sent by TmfEventsTable when a filter is applied.
952
953 ''Receivers''
954
955 Received by any component that shows trace data and needs to be notified of applied filters.
956
957 === TmfEventSearchAppliedSignal ===
958
959 ''Purpose''
960
961 This signal is used to indicate that a search has been applied to a trace.
962
963 ''Senders''
964
965 Sent by TmfEventsTable when a search is applied.
966
967 ''Receivers''
968
969 Received by any component that shows trace data and needs to be notified of applied searches.
970
971 === TmfTimestampFormatUpdateSignal ===
972
973 ''Purpose''
974
975 This signal is used to indicate that the timestamp format preference has been updated.
976
977 ''Senders''
978
979 Sent by TmfTimestampFormat when the default timestamp format preference is changed.
980
981 ''Receivers''
982
983 Received by any component that needs to refresh its display for the new timestamp format.
984
985 === TmfStatsUpdatedSignal ===
986
987 ''Purpose''
988
989 This signal is used to indicate that the statistics data model has been updated.
990
991 ''Senders''
992
993 Sent by statistic providers when new statistics data has been processed.
994
995 ''Receivers''
996
997 Received by statistics viewers and any component that needs to be notified of a statistics update.
998
999 == Debugging ==
1000
1001 TMF has built-in Eclipse tracing support for the debugging of signal interaction between components. To enable it, open the '''Run/Debug Configuration...''' dialog, select a configuration, click the '''Tracing''' tab, select the plug-in '''org.eclipse.linuxtools.tmf.core''', and check the '''signal''' item.
1002
1003 All signals sent and received will be logged to the file TmfTrace.log located in the Eclipse home directory.
1004
1005 = Generic State System =
1006
1007 == Introduction ==
1008
1009 The Generic State System is a utility available in TMF to track different states
1010 over the duration of a trace. It works by first sending some or all events of
1011 the trace into a state provider, which defines the state changes for a given
1012 trace type. Once built, views and analysis modules can then query the resulting
1013 database of states (called "state history") to get information.
1014
1015 For example, let's suppose we have the following sequence of events in a kernel
1016 trace:
1017
  10 s, sys_open, fd = 5, file = /home/user/myfile
  ...
  15 s, sys_read, fd = 5, size=32
  ...
  20 s, sys_close, fd = 5
1023
Now let's say we want to implement an analysis module which will track the
amount of bytes read and written to each file. Here, of course, the sys_read is
interesting. However, by just looking at that event, we have no information on
which file is being read; only its fd (5) is known. To get the mapping
fd 5 = /home/user/myfile, we have to go back to the sys_open event, which happens
5 seconds earlier.
1030
1031 But since we don't know exactly where this sys_open event is, we will have to go
1032 back to the very start of the trace, and look through events one by one! This is
1033 obviously not efficient, and will not scale well if we want to analyze many
1034 similar patterns, or for very large traces.
1035
1036 A solution in this case would be to use the state system to keep track of the
1037 amount of bytes read/written to every *filename* (instead of every file
1038 descriptor, like we get from the events). Then the module could ask the state
1039 system "what is the amount of bytes read for file "/home/user/myfile" at time
1040 16 s", and it would return the answer "32" (assuming there is no other read
1041 than the one shown).
1042
1043 == High-level components ==
1044
1045 The State System infrastructure is composed of 3 parts:
1046 * The state provider
1047 * The central state system
1048 * The storage backend
1049
1050 The state provider is the customizable part. This is where the mapping from
1051 trace events to state changes is done. This is what you want to implement for
1052 your specific trace type and analysis type. It's represented by the
1053 ITmfStateProvider interface (with a threaded implementation in
1054 AbstractTmfStateProvider, which you can extend).
1055
1056 The core of the state system is exposed through the ITmfStateSystem and
1057 ITmfStateSystemBuilder interfaces. The former allows only read-only access and
1058 is typically used for views doing queries. The latter also allows writing to the
1059 state history, and is typically used by the state provider.
1060
1061 Finally, each state system has its own separate backend. This determines how the
1062 intervals, or the "state history", are saved (in RAM, on disk, etc.) You can
1063 select the type of backend at construction time in the TmfStateSystemFactory.
1064
1065 == Definitions ==
1066
1067 Before we dig into how to use the state system, we should go over some useful
1068 definitions:
1069
1070 === Attribute ===
1071
1072 An attribute is the smallest element of the model that can be in any particular
1073 state. When we refer to the "full state", in fact it means we are interested in
1074 the state of every single attribute of the model.
1075
1076 === Attribute Tree ===
1077
1078 Attributes in the model can be placed in a tree-like structure, a bit like files
1079 and directories in a file system. However, note that an attribute can always
1080 have both a value and sub-attributes, so they are like files and directories at
1081 the same time. We are then able to refer to every single attribute with its
1082 path in the tree.
1083
1084 For example, in the attribute tree for LTTng kernel traces, we use the following
1085 attributes, among others:
1086
<pre>
|- Processes
|  |- 1000
|  |  |- PPID
|  |  |- Exec_name
|  |- 1001
|  |  |- PPID
|  |  |- Exec_name
|  ...
|- CPUs
|  |- 0
|  |  |- Status
|  |  |- Current_pid
...
</pre>
1102
In this model, the attribute "Processes/1000/PPID" refers to the PPID of the process
with PID 1000. The attribute "CPUs/0/Status" represents the status (running,
idle, etc.) of CPU 0. "Processes/1000/PPID" and "Processes/1001/PPID" are two
different attributes, even though their base name is the same: the whole path is
the unique identifier.
1108
1109 The value of each attribute can change over the duration of the trace,
1110 independently of the other ones, and independently of its position in the tree.
1111
1112 The tree-like organization is optional, all attributes could be at the same
1113 level. But it's possible to put them in a tree, and it helps make things
1114 clearer.
1115
1116 === Quark ===
1117
1118 In addition to a given path, each attribute also has a unique integer
1119 identifier, called the "quark". To continue with the file system analogy, this
1120 is like the inode number. When a new attribute is created, a new unique quark
1121 will be assigned automatically. They are assigned incrementally, so they will
1122 normally be equal to their order of creation, starting at 0.
1123
1124 Methods are offered to get the quark of an attribute from its path. The API
1125 methods for inserting state changes and doing queries normally use quarks
1126 instead of paths. This is to encourage users to cache the quarks and re-use
1127 them, which avoids re-walking the attribute tree over and over, which avoids
1128 unneeded hashing of strings.
1129
1130 === State value ===
1131
1132 The path and quark of an attribute will remain constant for the whole duration
1133 of the trace. However, the value carried by the attribute will change. The value
1134 of a specific attribute at a specific time is called the state value.
1135
1136 In the TMF implementation, state values can be integers, longs, doubles, or strings.
1137 There is also a "null value" type, which is used to indicate that no particular
1138 value is active for this attribute at this time, but without resorting to a
1139 'null' reference.
1140
1141 Any other type of value could be used, as long as the backend knows how to store
1142 it.
1143
1144 Note that the TMF implementation also forces every attribute to always carry the
1145 same type of state value. This is to make it simpler for views, so they can
1146 expect that an attribute will always use a given type, without having to check
1147 every single time. Null values are an exception, they are always allowed for all
1148 attributes, since they can safely be "unboxed" into all types.
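
In code, state values are created through the factory methods of the TmfStateValue class (package org.eclipse.linuxtools.tmf.core.statevalue). A brief sketch of some of the most common ones:

<pre>
// Factory methods of org.eclipse.linuxtools.tmf.core.statevalue.TmfStateValue
ITmfStateValue intValue = TmfStateValue.newValueInt(2);
ITmfStateValue longValue = TmfStateValue.newValueLong(4000000000L);
ITmfStateValue stringValue = TmfStateValue.newValueString("RUNNING");
ITmfStateValue nullValue = TmfStateValue.nullValue(); // the "no value" marker
</pre>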
1149
1150 === State change ===
1151
1152 A state change is the element that is inserted in the state system. It consists
1153 of:
1154 * a timestamp (the time at which the state change occurs)
1155 * an attribute (the attribute whose value will change)
1156 * a state value (the new value that the attribute will carry)
1157
1158 It's not an object per se in the TMF implementation (it's represented by a
1159 function call in the state provider). Typically, the state provider will insert
1160 zero, one or more state changes for every trace event, depending on its event
1161 type, payload, etc.
1162
1163 Note, we use "timestamp" here, but it's in fact a generic term that could be
1164 referred to as "index". For example, if a given trace type has no notion of
1165 timestamp, the event rank could be used.
1166
1167 In the TMF implementation, the timestamp is a long (64-bit integer).
1168
1169 === State interval ===
1170
1171 State changes are inserted into the state system, but state intervals are the
1172 objects that come out on the other side. Those are stocked in the storage
1173 backend. A state interval represents a "state" of an attribute we want to track.
1174 When doing queries on the state system, intervals are what is returned. The
1175 components of a state interval are:
1176 * Start time
1177 * End time
1178 * State value
1179 * Quark
1180
1181 The start and end times represent the time range of the state. The state value
1182 is the same as the state value in the state change that started this interval.
1183 The interval also keeps a reference to its quark, although you normally know
1184 your quark in advance when you do queries.
1185
1186 === State history ===
1187
1188 The state history is the name of the container for all the intervals created by
1189 the state system. The exact implementation (how the intervals are stored) is
1190 determined by the storage backend that is used.
1191
Some backends will use a state history that is persistent on disk, others do not.
1193 When loading a trace, if a history file is available and the backend supports
1194 it, it will be loaded right away, skipping the need to go through another
1195 construction phase.
1196
1197 === Construction phase ===
1198
1199 Before we can query a state system, we need to build the state history first. To
1200 do so, trace events are sent one-by-one through the state provider, which in
1201 turn sends state changes to the central component, which then creates intervals
1202 and stores them in the backend. This is called the construction phase.
1203
Note that the state system needs to receive its events in chronological order.
1205 This phase will end once the end of the trace is reached.
1206
Also note that it is possible to query the state system while it is being built.
1208 Any timestamp between the start of the trace and the current end time of the
1209 state system (available with ITmfStateSystem#getCurrentEndTime()) is a valid
1210 timestamp that can be queried.
1211
1212 === Queries ===
1213
1214 As mentioned previously, when doing queries on the state system, the returned
1215 objects will be state intervals. In most cases it's the state *value* we are
1216 interested in, but since the backend has to instantiate the interval object
1217 anyway, there is no additional cost to return the interval instead. This way we
1218 also get the start and end times of the state "for free".
1219
1220 There are two types of queries that can be done on the state system:
1221
1222 ==== Full queries ====
1223
1224 A full query means that we want to retrieve the whole state of the model for one
1225 given timestamp. As we remember, this means "the state of every single attribute
1226 in the model". As parameter we only need to pass the timestamp (see the API
1227 methods below). The return value will be an array of intervals, where the offset
1228 in the array represents the quark of each attribute.
1229
1230 ==== Single queries ====
1231
1232 In other cases, we might only be interested in the state of one particular
1233 attribute at one given timestamp. For these cases it's better to use a
single query. For a single query, we need to pass both a timestamp and a
quark in parameter. The return value will be a single interval, representing
the state that this particular attribute was in at that time.
1237
1238 Single queries are typically faster than full queries (but once again, this
1239 depends on the backend that is used), but not by much. Even if you only want the
1240 state of say 10 attributes out of 200, it could be faster to use a full query
1241 and only read the ones you need. Single queries should be used for cases where
1242 you only want one attribute per timestamp (for example, if you follow the state
1243 of the same attribute over a time range).
1244
1245
1246 == Relevant interfaces/classes ==
1247
1248 This section will describe the public interface and classes that can be used if
1249 you want to use the state system.
1250
1251 === Main classes in org.eclipse.linuxtools.tmf.core.statesystem ===
1252
1253 ==== ITmfStateProvider / AbstractTmfStateProvider ====
1254
1255 ITmfStateProvider is the interface you have to implement to define your state
1256 provider. This is where most of the work has to be done to use a state system
1257 for a custom trace type or analysis type.
1258
1259 For first-time users, it's recommended to extend AbstractTmfStateProvider
1260 instead. This class takes care of all the initialization mumbo-jumbo, and also
1261 runs the event handler in a separate thread. You will only need to implement
1262 eventHandle, which is the call-back that will be called for every event in the
1263 trace.
1264
1265 For an example, you can look at StatsStateProvider in the TMF tree, or at the
1266 small example below.
1267
1268 ==== TmfStateSystemFactory ====
1269
1270 Once you have defined your state provider, you need to tell your trace type to
1271 build a state system with this provider during its initialization. This consists
1272 of overriding TmfTrace#buildStateSystems() and in there of calling the method in
1273 TmfStateSystemFactory that corresponds to the storage backend you want to use
1274 (see the section [[#Comparison of state system backends]]).
1275
1276 You will have to pass in parameter the state provider you want to use, which you
1277 should have defined already. Each backend can also ask for more configuration
1278 information.
1279
1280 You must then call registerStateSystem(id, statesystem) to make your state
1281 system visible to the trace objects and the views. The ID can be any string of
1282 your choosing. To access this particular state system, the views or modules will
1283 need to use this ID.
1284
1285 Also, don't forget to call super.buildStateSystems() in your implementation,
1286 unless you know for sure you want to skip the state providers built by the
1287 super-classes.
1288
1289 You can look at how LttngKernelTrace does it for an example. It could also be
1290 possible to build a state system only under certain conditions (like only if the
1291 trace contains certain event types).
1292
1293
1294 ==== ITmfStateSystem ====
1295
1296 ITmfStateSystem is the main interface through which views or analysis modules
1297 will access the state system. It offers a read-only view of the state system,
1298 which means that no states can be inserted, and no attributes can be created.
1299 Calling TmfTrace#getStateSystems().get(id) will return you a ITmfStateSystem
1300 view of the requested state system. The main methods of interest are:
1301
1302 ===== getQuarkAbsolute()/getQuarkRelative() =====
1303
1304 Those are the basic quark-getting methods. The goal of the state system is to
1305 return the state values of given attributes at given timestamps. As we've seen
1306 earlier, attributes can be described with a file-system-like path. The goal of
1307 these methods is to convert from the path representation of the attribute to its
1308 quark.
1309
1310 Since quarks are created on-the-fly, there is no guarantee that the same
1311 attributes will have the same quark for two traces of the same type. The views
1312 should always query their quarks when dealing with a new trace or a new state
1313 provider. Beyond that however, quarks should be cached and reused as much as
1314 possible, to avoid potentially costly string re-hashing.
1315
getQuarkAbsolute() takes a variable number of Strings as parameters, which
represent the full path to the attribute. Some of them can be constants, some
can be computed programmatically, often from the event's fields.

getQuarkRelative() is to be used when you already know the quark of a certain
attribute, and want to access one of its sub-attributes. Its first parameter is
the origin quark, followed by a String varargs which represents the relative path
to the final attribute.
1324
1325 These two methods will throw an AttributeNotFoundException if trying to access
1326 an attribute that does not exist in the model.
1327
1328 These methods also imply that the view has the knowledge of how the attribute
1329 tree is organized. This should be a reasonable hypothesis, since the same
1330 analysis plugin will normally ship both the state provider and the view, and
1331 they will have been written by the same person. In other cases, it's possible to
1332 use getSubAttributes() to explore the organization of the attribute tree first.
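
As a quick illustration, here is a minimal sketch of both methods. It assumes the
"CPUs/0/Status" attribute layout used by the code example at the end of this
chapter, and an ITmfStateSystem instance named ''ss'':

<pre>
/* Resolve the quark of the "Status" attribute of CPU 0 in a single call... */
int statusQuark = ss.getQuarkAbsolute("CPUs", "0", "Status");

/* ...or in two steps, caching the intermediate quark for reuse. */
int cpuQuark = ss.getQuarkAbsolute("CPUs", "0");
int sameStatusQuark = ss.getQuarkRelative(cpuQuark, "Status");
</pre>

Both calls can throw the AttributeNotFoundException mentioned above, so in real
code they would be wrapped in a try/catch block.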
1333
1334 ===== waitUntilBuilt() =====
1335
1336 This is a simple method used to block the caller until the construction phase of
1337 this state system is done. If the view prefers to wait until all information is
1338 available before starting to do queries (to get all known attributes right away,
1339 for example), this is the guy to call.
1340
1341 ===== queryFullState() =====
1342
This is the method to do full queries. As mentioned earlier, you only need to
pass a target timestamp as a parameter. It will return a List of state intervals,
1345 in which the offset corresponds to the attribute quark. This will represent the
1346 complete state of the model at the requested time.
1347
1348 ===== querySingleState() =====
1349
The method to do single queries. You pass both a timestamp and an attribute
quark as parameters. This will return the single state interval matching this
timestamp/attribute pair.
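
To illustrate, here is a condensed sketch combining both query types. It assumes
the same "CPUs/0/Status" attribute as above, and omits the enclosing class,
imports and exception handling (TimeRangeException, AttributeNotFoundException
and StateSystemDisposedException would need to be caught in real code):

<pre>
/* Wait until the construction phase is over before querying. */
ss.waitUntilBuilt();

long ts = ss.getCurrentEndTime();
int quark = ss.getQuarkAbsolute("CPUs", "0", "Status");

/* Single query: one interval, for one attribute. */
ITmfStateInterval interval = ss.querySingleState(ts, quark);

/* Full query: one interval per attribute, indexed by quark. */
List<ITmfStateInterval> fullState = ss.queryFullState(ts);
ITmfStateInterval sameInterval = fullState.get(quark);
</pre>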
1353
1354 Other methods are available, you are encouraged to read their Javadoc and see if
1355 they can be potentially useful.
1356
1357 ==== ITmfStateSystemBuilder ====
1358
1359 ITmfStateSystemBuilder is the read-write interface to the state system. It
1360 extends ITmfStateSystem itself, so all its methods are available. It then adds
1361 methods that can be used to write to the state system, either by creating new
attributes or inserting state changes.
1363
1364 It is normally reserved for the state provider and should not be visible to
1365 external components. However it will be available in AbstractTmfStateProvider,
1366 in the field 'ss'. That way you can call ss.modifyAttribute() etc. in your state
1367 provider to write to the state.
1368
1369 The main methods of interest are:
1370
1371 ===== getQuark*AndAdd() =====
1372
1373 getQuarkAbsoluteAndAdd() and getQuarkRelativeAndAdd() work exactly like their
1374 non-AndAdd counterparts in ITmfStateSystem. The difference is that the -AndAdd
1375 versions will not throw any exception: if the requested attribute path does not
1376 exist in the system, it will be created, and its newly-assigned quark will be
1377 returned.
1378
When in a state provider, the -AndAdd version should normally be used (unless
you know for sure the attribute already exists and don't want to create it
otherwise). This means that there is no need to define the whole attribute tree
in advance; the attributes will be created on demand.
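
For example, inside an event handler, a state provider could resolve (and create,
if needed) the per-CPU "Status" attribute used earlier in this chapter. The
''cpu'' variable is an assumption of this sketch:

<pre>
/* Create or retrieve the quark of the CPU's attribute... */
int cpuQuark = ss.getQuarkAbsoluteAndAdd("CPUs", String.valueOf(cpu));

/* ...then create or retrieve its "Status" sub-attribute, relative to it. */
int statusQuark = ss.getQuarkRelativeAndAdd(cpuQuark, "Status");
</pre>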
1383
1384 ===== modifyAttribute() =====
1385
1386 This is the main state-change-insertion method. As was explained before, a state
1387 change is defined by a timestamp, an attribute and a state value. Those three
1388 elements need to be passed to modifyAttribute as parameters.
1389
1390 Other state change insertion methods are available (increment-, push-, pop- and
1391 removeAttribute()), but those are simply convenience wrappers around
1392 modifyAttribute(). Check their Javadoc for more information.
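
As a rough sketch of the convenience wrappers, assuming they follow the same
(timestamp, value, quark) parameter order as modifyAttribute() (check the Javadoc
to confirm), a call-stack-like attribute could be updated as follows. The quarks
and timestamps are assumptions of this example:

<pre>
/* Push a state onto the attribute's stack when something starts... */
ss.pushAttribute(startTs, TmfStateValue.newValueString("foo"), callStackQuark);

/* ...and pop it back off when it ends. */
ss.popAttribute(endTs, callStackQuark);

/* A simple counter attribute can use incrementAttribute() instead. */
ss.incrementAttribute(ts, counterQuark);
</pre>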
1393
1394 ===== closeHistory() =====
1395
1396 When the construction phase is done, do not forget to call closeHistory() to
1397 tell the backend that no more intervals will be received. Depending on the
1398 backend type, it might have to save files, close descriptors, etc. This ensures
that a persistent file can then be re-used when the trace is opened again.
1400
1401 If you use the AbstractTmfStateProvider, it will call closeHistory()
1402 automatically when it reaches the end of the trace.
1403
1404 === Other relevant interfaces ===
1405
1406 ==== o.e.l.tmf.core.statevalue.ITmfStateValue ====
1407
1408 This is the interface used to represent state values. Those are used when
inserting state changes in the provider, and are also part of the state intervals
1410 obtained when doing queries.
1411
1412 The abstract TmfStateValue class contains the factory methods to create new
1413 state values of either int, long, double or string types. To retrieve the real
1414 object inside the state value, one can use the .unbox* methods.
1415
1416 Note: Do not instantiate null values manually, use TmfStateValue.nullValue()
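
A short sketch of the factory and unboxing methods:

<pre>
/* Wrap a primitive into a state value... */
ITmfStateValue value = TmfStateValue.newValueInt(3000);

/* ...and get the primitive back out. Unboxing into the wrong type throws
 * a StateValueTypeException (see the Exceptions section below). */
int number = value.unboxInt();

/* Null values come from the factory, never from a constructor. */
ITmfStateValue nothing = TmfStateValue.nullValue();
</pre>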
1417
1418 ==== o.e.l.tmf.core.interval.ITmfStateInterval ====
1419
1420 This is the interface to represent the state intervals, which are stored in the
1421 state history backend, and are returned when doing state system queries. A very
1422 simple implementation is available in TmfStateInterval. Its methods should be
1423 self-descriptive.
1424
1425 === Exceptions ===
1426
1427 The following exceptions, found in o.e.l.tmf.core.exceptions, are related to
1428 state system activities.
1429
1430 ==== AttributeNotFoundException ====
1431
This is thrown by getQuarkRelative() and getQuarkAbsolute() (but not by the
1433 -AndAdd versions!) when passing an attribute path that is not present in the
1434 state system. This is to ensure that no new attribute is created when using
1435 these versions of the methods.
1436
1437 Views can expect some attributes to be present, but they should handle these
1438 exceptions for when the attributes end up not being in the state system (perhaps
1439 this particular trace didn't have a certain type of events, etc.)
1440
1441 ==== StateValueTypeException ====
1442
1443 This exception will be thrown when trying to unbox a state value into a type
1444 different than its own. You should always check with ITmfStateValue#getType()
1445 beforehand if you are not sure about the type of a given state value.
1446
1447 ==== TimeRangeException ====
1448
1449 This exception is thrown when trying to do a query on the state system for a
1450 timestamp that is outside of its range. To be safe, you should check with
1451 ITmfStateSystem#getStartTime() and #getCurrentEndTime() for the current valid
1452 range of the state system. This is especially important when doing queries on
1453 a state system that is currently being built.
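
For example, a view could clamp its requested timestamp to the currently valid
range before querying. The ''requestedTs'' variable is an assumption of this
sketch, and exception handling is omitted:

<pre>
long start = ss.getStartTime();
long end = ss.getCurrentEndTime();

/* Clamp the requested timestamp into the state system's valid range. */
long safeTs = Math.max(start, Math.min(end, requestedTs));

List<ITmfStateInterval> state = ss.queryFullState(safeTs);
</pre>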
1454
1455 ==== StateSystemDisposedException ====
1456
1457 This exception is thrown when trying to access a state system that has been
1458 disposed, with its dispose() method. This can potentially happen at shutdown,
1459 since Eclipse is not always consistent with the order in which the components
1460 are closed.
1461
1462
1463 == Comparison of state system backends ==
1464
1465 As we have seen in section [[#High-level components]], the state system needs
1466 a storage backend to save the intervals. Different implementations are
1467 available when building your state system from TmfStateSystemFactory.
1468
1469 Do not confuse full/single queries with full/partial history! All backend types
1470 should be able to handle any type of queries defined in the ITmfStateSystem API,
1471 unless noted otherwise.
1472
1473 === Full history ===
1474
1475 Available with TmfStateSystemFactory#newFullHistory(). The full history uses a
History Tree data structure, which is an optimized structure to store state
1477 intervals on disk. Once built, it can respond to queries in a ''log(n)'' manner.
1478
1479 You need to specify a file at creation time, which will be the container for
1480 the history tree. Once it's completely built, it will remain on disk (until you
1481 delete the trace from the project). This way it can be reused from one session
1482 to another, which makes subsequent loading time much faster.
1483
This is the backend used by the LTTng kernel plugin. It offers good scalability and
1485 performance, even at extreme sizes (it's been tested with traces of sizes up to
1486 500 GB). Its main downside is the amount of disk space required: since every
1487 single interval is written to disk, the size of the history file can quite
1488 easily reach and even surpass the size of the trace itself.
1489
1490 === Null history ===
1491
1492 Available with TmfStateSystemFactory#newNullHistory(). As its name implies the
1493 null history is in fact an absence of state history. All its query methods will
1494 return null (see the Javadoc in NullBackend).
1495
1496 Obviously, no file is required, and almost no memory space is used.
1497
1498 It's meant to be used in cases where you are not interested in past states, but
1499 only in the "ongoing" one. It can also be useful for debugging and benchmarking.
1500
1501 === In-memory history ===
1502
1503 Available with TmfStateSystemFactory#newInMemHistory(). This is a simple wrapper
1504 using a TreeSet to store all state intervals in memory. The implementation at
the moment is quite simple; it will perform a binary search on entries when
1506 doing queries to find the ones that match.
1507
1508 The advantage of this method is that it's very quick to build and query, since
1509 all the information resides in memory. However, you are limited to 2^31 entries
(roughly 2 billion), and depending on your state provider and trace type, that
1511 can happen really fast!
1512
1513 There are no safeguards, so if you bust the limit you will end up with
ArrayIndexOutOfBoundsExceptions everywhere. If your trace or state history can be
1515 arbitrarily big, it's probably safer to use a Full History instead.
1516
1517 === Partial history ===
1518
1519 Available with TmfStateSystemFactory#newPartialHistory(). The partial history is
1520 a more advanced form of the full history. Instead of writing all state intervals
1521 to disk like with the full history, we only write a small fraction of them, and
1522 go back to read the trace to recreate the states in-between.
1523
1524 It has a big advantage over a full history in terms of disk space usage. It's
1525 very possible to reduce the history tree file size by a factor of 1000, while
1526 keeping query times within a factor of two. Its main downside comes from the
1527 fact that you cannot do efficient single queries with it (they are implemented
1528 by doing full queries underneath).
1529
1530 This makes it a poor choice for views like the Control Flow view, where you do
1531 a lot of range queries and single queries. However, it is a perfect fit for
1532 cases like statistics, where you usually do full queries already, and you store
1533 lots of small states which are very easy to "compress".
1534
1535 However, it can't really be used until bug 409630 is fixed.
1536
1537 == State System Operations ==
1538
1539 TmfStateSystemOperations is a static class that implements additional
1540 statistical operations that can be performed on attributes of the state system.
1541
1542 These operations require that the attribute be one of the numerical values
1543 (int, long or double).
1544
1545 The speed of these operations can be greatly improved for large data sets if
1546 the attribute was inserted in the state system as a mipmap attribute. Refer to
1547 the [[#Mipmap feature | Mipmap feature]] section.
1548
1549 ===== queryRangeMax() =====
1550
1551 This method returns the maximum numerical value of an attribute in the
1552 specified time range. The attribute must be of type int, long or double.
1553 Null values are ignored. The returned value will be of the same state value
1554 type as the base attribute, or a null value if there is no state interval
1555 stored in the given time range.
1556
1557 ===== queryRangeMin() =====
1558
1559 This method returns the minimum numerical value of an attribute in the
1560 specified time range. The attribute must be of type int, long or double.
1561 Null values are ignored. The returned value will be of the same state value
1562 type as the base attribute, or a null value if there is no state interval
1563 stored in the given time range.
1564
1565 ===== queryRangeAverage() =====
1566
1567 This method returns the average numerical value of an attribute in the
1568 specified time range. The attribute must be of type int, long or double.
1569 Each state interval value is weighted according to time. Null values are
1570 counted as zero. The returned value will be a double primitive, which will
1571 be zero if there is no state interval stored in the given time range.
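
Here is a sketch of how these operations could be invoked. The parameter order
shown here (the state system, the start and end timestamps of the range, and the
quark of the base attribute) is an assumption of this sketch; check the Javadoc
of TmfStateSystemOperations for the exact signatures:

<pre>
/* Assumed parameters: the state system, the time range, and the base attribute's quark. */
ITmfStateValue max = TmfStateSystemOperations.queryRangeMax(ss, t1, t2, quark);
ITmfStateValue min = TmfStateSystemOperations.queryRangeMin(ss, t1, t2, quark);
double average = TmfStateSystemOperations.queryRangeAverage(ss, t1, t2, quark);
</pre>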
1572
1573 == Code example ==
1574
1575 Here is a small example of code that will use the state system. For this
example, let's assume we want to track the state of all the CPUs in an LTTng
1577 kernel trace. To do so, we will watch for the "sched_switch" event in the state
1578 provider, and will update an attribute indicating if the associated CPU should
1579 be set to "running" or "idle".
1580
1581 We will use an attribute tree that looks like this:
<pre>
CPUs
|--0
|  |--Status
|
|--1
|  |--Status
|
|--2
|  |--Status
...
</pre>
1594
1595 The second-level attributes will be named from the information available in the
1596 trace events. Only the "Status" attributes will carry a state value (this means
1597 we could have just used "1", "2", "3",... directly, but we'll do it in a tree
1598 for the example's sake).
1599
1600 Also, we will use integer state values to represent "running" or "idle", instead
1601 of saving the strings that would get repeated every time. This will help in
1602 reducing the size of the history file.
1603
1604 First we will define a state provider in MyStateProvider. Then, assuming we
1605 have already implemented a custom trace type extending CtfTmfTrace, we will add
1606 a section to it to make it build a state system using the provider we defined
1607 earlier. Finally, we will show some example code that can query the state
1608 system, which would normally go in a view or analysis module.
1609
1610 === State Provider ===
1611
1612 <pre>
1613 import org.eclipse.linuxtools.tmf.core.ctfadaptor.CtfTmfEvent;
1614 import org.eclipse.linuxtools.tmf.core.event.ITmfEvent;
1615 import org.eclipse.linuxtools.tmf.core.exceptions.AttributeNotFoundException;
1616 import org.eclipse.linuxtools.tmf.core.exceptions.StateValueTypeException;
1617 import org.eclipse.linuxtools.tmf.core.exceptions.TimeRangeException;
1618 import org.eclipse.linuxtools.tmf.core.statesystem.AbstractTmfStateProvider;
1619 import org.eclipse.linuxtools.tmf.core.statevalue.ITmfStateValue;
1620 import org.eclipse.linuxtools.tmf.core.statevalue.TmfStateValue;
1621 import org.eclipse.linuxtools.tmf.core.trace.ITmfTrace;
1622
1623 /**
1624 * Example state system provider.
1625 *
1626 * @author Alexandre Montplaisir
1627 */
1628 public class MyStateProvider extends AbstractTmfStateProvider {
1629
1630 /** State value representing the idle state */
1631 public static ITmfStateValue IDLE = TmfStateValue.newValueInt(0);
1632
1633 /** State value representing the running state */
1634 public static ITmfStateValue RUNNING = TmfStateValue.newValueInt(1);
1635
1636 /**
1637 * Constructor
1638 *
1639 * @param trace
1640 * The trace to which this state provider is associated
1641 */
1642 public MyStateProvider(ITmfTrace trace) {
1643 super(trace, CtfTmfEvent.class, "Example"); //$NON-NLS-1$
1644 /*
1645 * The third parameter here is not important, it's only used to name a
1646 * thread internally.
1647 */
1648 }
1649
1650 @Override
1651 public int getVersion() {
1652 /*
1653 * If the version of an existing file doesn't match the version supplied
1654 * in the provider, a rebuild of the history will be forced.
1655 */
1656 return 1;
1657 }
1658
1659 @Override
1660 public MyStateProvider getNewInstance() {
1661 return new MyStateProvider(getTrace());
1662 }
1663
1664 @Override
1665 protected void eventHandle(ITmfEvent ev) {
1666 /*
* AbstractTmfStateProvider should have already checked for the correct
1668 * class type.
1669 */
1670 CtfTmfEvent event = (CtfTmfEvent) ev;
1671
1672 final long ts = event.getTimestamp().getValue();
        try {

            if (event.getEventName().equals("sched_switch")) {
                /* Only sched_switch events carry the "next_tid" field, so read it here. */
                Integer nextTid = ((Long) event.getContent().getField("next_tid").getValue()).intValue();
                int quark = ss.getQuarkAbsoluteAndAdd("CPUs", String.valueOf(event.getCPU()), "Status");
1679 ITmfStateValue value;
1680 if (nextTid > 0) {
1681 value = RUNNING;
1682 } else {
1683 value = IDLE;
1684 }
1685 ss.modifyAttribute(ts, value, quark);
1686 }
1687
1688 } catch (TimeRangeException e) {
1689 /*
1690 * This should not happen, since the timestamp comes from a trace
1691 * event.
1692 */
1693 throw new IllegalStateException(e);
1694 } catch (AttributeNotFoundException e) {
1695 /*
1696 * This should not happen either, since we're only accessing a quark
1697 * we just created.
1698 */
1699 throw new IllegalStateException(e);
1700 } catch (StateValueTypeException e) {
1701 /*
1702 * This wouldn't happen here, but could potentially happen if we try
1703 * to insert mismatching state value types in the same attribute.
1704 */
1705 e.printStackTrace();
1706 }
1707
1708 }
1709
1710 }
1711 </pre>
1712
1713 === Trace type definition ===
1714
1715 <pre>
1716 import java.io.File;
1717
1718 import org.eclipse.core.resources.IProject;
1719 import org.eclipse.core.runtime.IStatus;
1720 import org.eclipse.core.runtime.Status;
1721 import org.eclipse.linuxtools.tmf.core.ctfadaptor.CtfTmfTrace;
1722 import org.eclipse.linuxtools.tmf.core.exceptions.TmfTraceException;
1723 import org.eclipse.linuxtools.tmf.core.statesystem.ITmfStateProvider;
1724 import org.eclipse.linuxtools.tmf.core.statesystem.ITmfStateSystem;
1725 import org.eclipse.linuxtools.tmf.core.statesystem.TmfStateSystemFactory;
1726 import org.eclipse.linuxtools.tmf.core.trace.TmfTraceManager;
1727
1728 /**
1729 * Example of a custom trace type using a custom state provider.
1730 *
1731 * @author Alexandre Montplaisir
1732 */
1733 public class MyTraceType extends CtfTmfTrace {
1734
1735 /** The file name of the history file */
1736 public final static String HISTORY_FILE_NAME = "mystatefile.ht";
1737
1738 /** ID of the state system we will build */
1739 public static final String STATE_ID = "org.eclipse.linuxtools.lttng2.example";
1740
1741 /**
1742 * Default constructor
1743 */
1744 public MyTraceType() {
1745 super();
1746 }
1747
1748 @Override
1749 public IStatus validate(final IProject project, final String path) {
1750 /*
1751 * Add additional validation code here, and return a IStatus.ERROR if
1752 * validation fails.
1753 */
1754 return Status.OK_STATUS;
1755 }
1756
1757 @Override
1758 protected void buildStateSystem() throws TmfTraceException {
1759 super.buildStateSystem();
1760
1761 /* Build the custom state system for this trace */
1762 String directory = TmfTraceManager.getSupplementaryFileDir(this);
1763 final File htFile = new File(directory + HISTORY_FILE_NAME);
1764 final ITmfStateProvider htInput = new MyStateProvider(this);
1765
1766 ITmfStateSystem ss = TmfStateSystemFactory.newFullHistory(htFile, htInput, false);
1767 fStateSystems.put(STATE_ID, ss);
1768 }
1769
1770 }
1771 </pre>
1772
1773 === Query code ===
1774
1775 <pre>
1776 import java.util.List;
1777
1778 import org.eclipse.linuxtools.tmf.core.exceptions.AttributeNotFoundException;
1779 import org.eclipse.linuxtools.tmf.core.exceptions.StateSystemDisposedException;
1780 import org.eclipse.linuxtools.tmf.core.exceptions.TimeRangeException;
1781 import org.eclipse.linuxtools.tmf.core.interval.ITmfStateInterval;
1782 import org.eclipse.linuxtools.tmf.core.statesystem.ITmfStateSystem;
1783 import org.eclipse.linuxtools.tmf.core.statevalue.ITmfStateValue;
1784 import org.eclipse.linuxtools.tmf.core.trace.ITmfTrace;
1785
1786 /**
1787 * Class showing examples of state system queries.
1788 *
1789 * @author Alexandre Montplaisir
1790 */
1791 public class QueryExample {
1792
1793 private final ITmfStateSystem ss;
1794
1795 /**
1796 * Constructor
1797 *
1798 * @param trace
1799 * Trace that this "view" will display.
1800 */
1801 public QueryExample(ITmfTrace trace) {
1802 ss = trace.getStateSystems().get(MyTraceType.STATE_ID);
1803 }
1804
1805 /**
1806 * Example method of querying one attribute in the state system.
1807 *
1808 * We pass it a cpu and a timestamp, and it returns us if that cpu was
1809 * executing a process (true/false) at that time.
1810 *
1811 * @param cpu
1812 * The CPU to check
1813 * @param timestamp
1814 * The timestamp of the query
1815 * @return True if the CPU was running, false otherwise
1816 */
1817 public boolean cpuIsRunning(int cpu, long timestamp) {
1818 try {
1819 int quark = ss.getQuarkAbsolute("CPUs", String.valueOf(cpu), "Status");
1820 ITmfStateValue value = ss.querySingleState(timestamp, quark).getStateValue();
1821
1822 if (value.equals(MyStateProvider.RUNNING)) {
1823 return true;
1824 }
1825
1826 /*
1827 * Since at this level we have no guarantee on the contents of the state
1828 * system, it's important to handle these cases correctly.
1829 */
1830 } catch (AttributeNotFoundException e) {
1831 /*
1832 * Handle the case where the attribute does not exist in the state
1833 * system (no CPU with this number, etc.)
1834 */
1835 ...
1836 } catch (TimeRangeException e) {
1837 /*
1838 * Handle the case where 'timestamp' is outside of the range of the
1839 * history.
1840 */
1841 ...
1842 } catch (StateSystemDisposedException e) {
1843 /*
1844 * Handle the case where the state system is being disposed. If this
1845 * happens, it's normally when shutting down, so the view can just
1846 * return immediately and wait it out.
1847 */
1848 }
1849 return false;
1850 }
1851
1852
1853 /**
1854 * Example method of using a full query.
1855 *
1856 * We pass it a timestamp, and it returns us how many CPUs were executing a
1857 * process at that moment.
1858 *
1859 * @param timestamp
1860 * The target timestamp
1861 * @return The amount of CPUs that were running at that time
1862 */
1863 public int getNbRunningCpus(long timestamp) {
1864 int count = 0;
1865
1866 try {
1867 /* Get the list of the quarks we are interested in. */
1868 List<Integer> quarks = ss.getQuarks("CPUs", "*", "Status");
1869
1870 /*
1871 * Get the full state at our target timestamp (it's better than
1872 * doing an arbitrary number of single queries).
1873 */
1874 List<ITmfStateInterval> state = ss.queryFullState(timestamp);
1875
1876 /* Look at the value of the state for each quark */
1877 for (Integer quark : quarks) {
1878 ITmfStateValue value = state.get(quark).getStateValue();
1879 if (value.equals(MyStateProvider.RUNNING)) {
1880 count++;
1881 }
1882 }
1883
1884 } catch (TimeRangeException e) {
1885 /*
1886 * Handle the case where 'timestamp' is outside of the range of the
1887 * history.
1888 */
1889 ...
1890 } catch (StateSystemDisposedException e) {
1891 /* Handle the case where the state system is being disposed. */
1892 ...
1893 }
1894 return count;
1895 }
1896 }
1897 </pre>
1898
1899 == Mipmap feature ==
1900
1901 The mipmap feature allows attributes to be inserted into the state system with
1902 additional computations performed to automatically store sub-attributes that
1903 can later be used for statistical operations. The mipmap has a resolution which
1904 represents the number of state attribute changes that are used to compute the
1905 value at the next mipmap level.
1906
1907 The supported mipmap features are: max, min, and average. Each one of these
1908 features requires that the base attribute be a numerical state value (int, long
1909 or double). An attribute can be mipmapped for one or more of the features at
1910 the same time.
1911
1912 To use a mipmapped attribute in queries, call the corresponding methods of the
1913 static class [[#State System Operations | TmfStateSystemOperations]].
1914
1915 === AbstractTmfMipmapStateProvider ===
1916
1917 AbstractTmfMipmapStateProvider is an abstract provider class that allows adding
1918 features to a specific attribute into a mipmap tree. It extends AbstractTmfStateProvider.
1919
1920 If a provider wants to add mipmapped attributes to its tree, it must extend
1921 AbstractTmfMipmapStateProvider and call modifyMipmapAttribute() in the event
1922 handler, specifying one or more mipmap features to compute. Then the structure
of the attribute tree will be:
1924
1925 <pre>
1926 |- <attribute>
1927 | |- <mipmapFeature> (min/max/avg)
1928 | | |- 1
1929 | | |- 2
1930 | | |- 3
1931 | | ...
1932 | | |- n (maximum mipmap level)
1933 | |- <mipmapFeature> (min/max/avg)
1934 | | |- 1
1935 | | |- 2
1936 | | |- 3
1937 | | ...
1938 | | |- n (maximum mipmap level)
1939 | ...
1940 </pre>
1941
1942 = UML2 Sequence Diagram Framework =
1943
1944 The purpose of the UML2 Sequence Diagram Framework of TMF is to provide a framework for generation of UML2 sequence diagrams. It provides
1945 *UML2 Sequence diagram drawing capabilities (i.e. lifelines, messages, activations, object creation and deletion)
1946 *a generic, re-usable Sequence Diagram View
1947 *Eclipse Extension Point for the creation of sequence diagrams
1948 *callback hooks for searching and filtering within the Sequence Diagram View
1949 *scalability<br>
1950 The following chapters describe the Sequence Diagram Framework as well as a reference implementation and its usage.
1951
1952 == TMF UML2 Sequence Diagram Extensions ==
1953
In the UML2 Sequence Diagram Framework an Eclipse extension point is defined so that other plug-ins can contribute code to create sequence diagrams.
1955
1956 '''Identifier''': org.eclipse.linuxtools.tmf.ui.uml2SDLoader<br>
1957 '''Since''': 1.0<br>
1958 '''Description''': This extension point aims to list and connect any UML2 Sequence Diagram loader.<br>
1959 '''Configuration Markup''':<br>
1960
1961 <pre>
1962 <!ELEMENT extension (uml2SDLoader)+>
1963 <!ATTLIST extension
1964 point CDATA #REQUIRED
1965 id CDATA #IMPLIED
1966 name CDATA #IMPLIED
1967 >
1968 </pre>
1969
1970 *point - A fully qualified identifier of the target extension point.
1971 *id - An optional identifier of the extension instance.
1972 *name - An optional name of the extension instance.
1973
1974 <pre>
<!ELEMENT uml2SDLoader EMPTY>
<!ATTLIST uml2SDLoader
id CDATA #REQUIRED
name CDATA #REQUIRED
class CDATA #REQUIRED
view CDATA #REQUIRED
default (true | false)
>
</pre>
1983
1984 *id - A unique identifier for this uml2SDLoader. This is not mandatory as long as the id attribute cannot be retrieved by the provider plug-in. The class attribute is the one on which the underlying algorithm relies.
*name - A name of the extension instance.
*class - The implementation of this UML2 SD viewer loader. The class must implement org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader.
*view - The view ID of the view that this loader aims to populate. Either org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView itself or an extension of org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView.
*default - Set to true to make this loader the default one for the view; in case of several default loaders, the first one coming from the extensions list is taken.
1989
1990
1991 == Management of the Extension Point ==
1992
1993 The TMF UI plug-in is responsible for evaluating each contribution to the extension point.
1994 <br>
1995 <br>
With this extension point, a loader class is associated with a Sequence Diagram View. Multiple loaders can be associated with a single Sequence Diagram View. However, additional means have to be implemented to specify which loader should be used when opening the view. For example, an Eclipse action or command could be used for that. This additional code is not necessary if there is only one loader associated with a given Sequence Diagram View and this loader has the attribute "default" set to "true". (see also [[#Using one Sequence Diagram View with Multiple Loaders | Using one Sequence Diagram View with Multiple Loaders]])
1997
1998 == Sequence Diagram View ==
1999
2000 For this extension point a Sequence Diagram View has to be defined as well. The Sequence Diagram View class implementation is provided by the plug-in ''org.eclipse.linuxtools.tmf.ui'' (''org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView'') and can be used as is or can also be sub-classed. For that, a view extension has to be added to the ''plugin.xml''.
2001
2002 === Supported Widgets ===
2003
2004 The loader class provides a frame containing all the UML2 widgets to be displayed. The following widgets exist:
2005
2006 *Lifeline
2007 *Activation
2008 *Synchronous Message
2009 *Asynchronous Message
2010 *Synchronous Message Return
2011 *Asynchronous Message Return
2012 *Stop
2013
2014 For a lifeline, a category can be defined. The lifeline category defines icons, which are displayed in the lifeline header.
2015
2016 === Zooming ===
2017
2018 The Sequence Diagram View allows the user to zoom in, zoom out and reset the zoom factor.
2019
2020 === Printing ===
2021
2022 It is possible to print the whole sequence diagram as well as part of it.
2023
2024 === Key Bindings ===
2025
2026 *SHIFT+ALT+ARROW-DOWN - to scroll down within sequence diagram one view page at a time
2027 *SHIFT+ALT+ARROW-UP - to scroll up within sequence diagram one view page at a time
2028 *SHIFT+ALT+ARROW-RIGHT - to scroll right within sequence diagram one view page at a time
2029 *SHIFT+ALT+ARROW-LEFT - to scroll left within sequence diagram one view page at a time
2030 *SHIFT+ALT+ARROW-HOME - to jump to the beginning of the selected message if not already visible in page
2031 *SHIFT+ALT+ARROW-END - to jump to the end of the selected message if not already visible in page
2032 *CTRL+F - to open find dialog if either the basic or extended find provider is defined (see [[#Using the Find Provider Interface | Using the Find Provider Interface]])
2033 *CTRL+P - to open print dialog
2034
2035 === Preferences ===
2036
The UML2 Sequence Diagram Framework provides preferences to customize the appearance of the Sequence Diagram View. The color of all widgets and text, as well as the fonts of the text of all widgets, can be adjusted. Amongst others, the default lifeline width can be altered. To change preferences select '''Windows->Preferences->Tracing->UML2 Sequence Diagrams'''. The following preference page will show:<br>
2038 [[Image:images/SeqDiagramPref.png]] <br>
2039 After changing the preferences select '''OK'''.
2040
2041 === Callback hooks ===
2042
The Sequence Diagram View provides several callback hooks so that extensions can provide application-specific functionality. The following interfaces can be provided:
* Basic find provider or extended find provider<br> For finding within the sequence diagram
* Basic filter provider or extended filter provider<br> For filtering within the sequence diagram
* Basic paging provider or advanced paging provider<br> For scalability reasons, used to limit the number of displayed messages
* Properties provider<br> To provide properties of selected elements
* Collapse provider<br> To collapse areas of the sequence diagram
2049
2050 == Tutorial ==
2051
This tutorial describes how to create a UML2 Sequence Diagram Loader extension and use this loader in Eclipse.
2053
2054 === Prerequisites ===
2055
2056 The tutorial is based on Eclipse 4.4 (Eclipse Luna) and TMF 3.0.0.
2057
2058 === Creating an Eclipse UI Plug-in ===
2059
2060 To create a new project with name org.eclipse.linuxtools.tmf.sample.ui select '''File -> New -> Project -> Plug-in Development -> Plug-in Project'''. <br>
2061 [[Image:images/Screenshot-NewPlug-inProject1.png]]<br>
2062
2063 [[Image:images/Screenshot-NewPlug-inProject2.png]]<br>
2064
2065 [[Image:images/Screenshot-NewPlug-inProject3.png]]<br>
2066
2067 === Creating a Sequence Diagram View ===
2068
2069 To open the plug-in manifest, double-click on the MANIFEST.MF file. <br>
2070 [[Image:images/SelectManifest.png]]<br>
2071
2072 Change to the Dependencies tab and select '''Add...''' of the ''Required Plug-ins'' section. A new dialog box will open. Next find plug-ins ''org.eclipse.linuxtools.tmf.ui'' and ''org.eclipse.linuxtools.tmf.core'' and then press '''OK'''<br>
2073 [[Image:images/AddDependencyTmfUi.png]]<br>
2074
2075 Change to the Extensions tab and select '''Add...''' of the ''All Extension'' section. A new dialog box will open. Find the view extension ''org.eclipse.ui.views'' and press '''Finish'''.<br>
2076 [[Image:images/AddViewExtension1.png]]<br>
2077
2078 To create a Sequence Diagram View, click the right mouse button. Then select '''New -> view'''<br>
2079 [[Image:images/AddViewExtension2.png]]<br>
2080
2081 A new view entry has been created. Fill in the fields ''id'', ''name'' and ''class''. Note that for ''class'' the SD view implementation (''org.eclipse.linuxtools.tmf.ui.views.SDView'') of the TMF UI plug-in is used.<br>
2082 [[Image:images/FillSampleSeqDiagram.png]]<br>
2083
The view is prepared. Now run the example. To launch an Eclipse Application, select the ''Overview'' tab and click on '''Launch an Eclipse Application'''<br>
2085 [[Image:images/RunEclipseApplication.png]]<br>
2086
2087 A new Eclipse application window will show. In the new window go to '''Windows -> Show View -> Other... -> Other -> Sample Sequence Diagram'''.<br>
2088 [[Image:images/ShowViewOther.png]]<br>
2089
The Sequence Diagram View will open with a blank page.<br>
2091 [[Image:images/BlankSampleSeqDiagram.png]]<br>
2092
2093 Close the Example Application.
2094
2095 === Defining the uml2SDLoader Extension ===
2096
2097 After defining the Sequence Diagram View it's time to create the ''uml2SDLoader'' Extension. <br>
2098
2099 Before doing that add a dependency to TMF. For that select '''Add...''' of the ''Required Plug-ins'' section. A new dialog box will open. Next find plug-in ''org.eclipse.linuxtools.tmf'' and press '''OK'''<br>
2100 [[Image:images/AddDependencyTmf.png]]<br>
2101
2102 To create the loader extension, change to the Extensions tab and select '''Add...''' of the ''All Extension'' section. A new dialog box will open. Find the extension ''org.eclipse.linuxtools.tmf.ui.uml2SDLoader'' and press '''Finish'''.<br>
2103 [[Image:images/AddTmfUml2SDLoader.png]]<br>
2104
A new ''uml2SDLoader'' extension has been created. Fill in the fields ''id'', ''name'', ''class'', ''view'' and ''default''. Set ''default'' to true for this example. For the view add the id of the Sequence Diagram View of chapter [[#Creating a Sequence Diagram View | Creating a Sequence Diagram View]]. <br>
2106 [[Image:images/FillSampleLoader.png]]<br>
2107
2108 Then click on ''class'' (see above) to open the new class dialog box. Fill in the relevant fields and select '''Finish'''. <br>
2109 [[Image:images/NewSampleLoaderClass.png]]<br>
2110
2111 A new Java class will be created which implements the interface ''org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader''.<br>
2112
2113 <pre>
2114 package org.eclipse.linuxtools.tmf.sample.ui;
2115
2116 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView;
2117 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader;
2118
2119 public class SampleLoader implements IUml2SDLoader {
2120
2121 public SampleLoader() {
2122 // TODO Auto-generated constructor stub
2123 }
2124
2125 @Override
2126 public void dispose() {
2127 // TODO Auto-generated method stub
2128
2129 }
2130
2131 @Override
2132 public String getTitleString() {
2133 // TODO Auto-generated method stub
2134 return null;
2135 }
2136
2137 @Override
2138 public void setViewer(SDView arg0) {
2139 // TODO Auto-generated method stub
2140
2141 }
}
</pre>
2143
2144 === Implementing the Loader Class ===
2145
Next, implement the methods of the IUml2SDLoader interface. The following code snippet shows how to create the major sequence diagram elements. Please note that no time information is stored.<br>
2147
2148 <pre>
2149 package org.eclipse.linuxtools.tmf.sample.ui;
2150
2151 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView;
2152 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.AsyncMessage;
2153 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.AsyncMessageReturn;
import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.EllipsisMessage;
import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.ExecutionOccurrence;
2155 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.Frame;
2156 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.Lifeline;
2157 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.Stop;
2158 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.SyncMessage;
2159 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.SyncMessageReturn;
2160 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader;
2161
2162 public class SampleLoader implements IUml2SDLoader {
2163
2164 private SDView fSdView;
2165
2166 public SampleLoader() {
2167 }
2168
2169 @Override
2170 public void dispose() {
2171 }
2172
2173 @Override
2174 public String getTitleString() {
2175 return "Sample Diagram";
2176 }
2177
2178 @Override
2179 public void setViewer(SDView arg0) {
2180 fSdView = arg0;
2181 createFrame();
2182 }
2183
2184 private void createFrame() {
2185
2186 Frame testFrame = new Frame();
2187 testFrame.setName("Sample Frame");
2188
2189 /*
2190 * Create lifelines
2191 */
2192
2193 Lifeline lifeLine1 = new Lifeline();
2194 lifeLine1.setName("Object1");
2195 testFrame.addLifeLine(lifeLine1);
2196
2197 Lifeline lifeLine2 = new Lifeline();
2198 lifeLine2.setName("Object2");
2199 testFrame.addLifeLine(lifeLine2);
2200
2201
2202 /*
2203 * Create Sync Message
2204 */
2205 // Get new occurrence on lifelines
2206 lifeLine1.getNewEventOccurrence();
2207
2208 // Get Sync message instances
2209 SyncMessage start = new SyncMessage();
2210 start.setName("Start");
2211 start.setEndLifeline(lifeLine1);
2212 testFrame.addMessage(start);
2213
2214 /*
2215 * Create Sync Message
2216 */
2217 // Get new occurrence on lifelines
2218 lifeLine1.getNewEventOccurrence();
2219 lifeLine2.getNewEventOccurrence();
2220
2221 // Get Sync message instances
2222 SyncMessage syn1 = new SyncMessage();
2223 syn1.setName("Sync Message 1");
2224 syn1.setStartLifeline(lifeLine1);
2225 syn1.setEndLifeline(lifeLine2);
2226 testFrame.addMessage(syn1);
2227
2228 /*
2229 * Create corresponding Sync Message Return
2230 */
2231
2232 // Get new occurrence on lifelines
2233 lifeLine1.getNewEventOccurrence();
2234 lifeLine2.getNewEventOccurrence();
2235
2236 SyncMessageReturn synReturn1 = new SyncMessageReturn();
2237 synReturn1.setName("Sync Message Return 1");
2238 synReturn1.setStartLifeline(lifeLine2);
2239 synReturn1.setEndLifeline(lifeLine1);
2240 synReturn1.setMessage(syn1);
2241 testFrame.addMessage(synReturn1);
2242
2243 /*
2244 * Create Activations (Execution Occurrence)
2245 */
2246 ExecutionOccurrence occ1 = new ExecutionOccurrence();
2247 occ1.setStartOccurrence(start.getEventOccurrence());
2248 occ1.setEndOccurrence(synReturn1.getEventOccurrence());
2249 lifeLine1.addExecution(occ1);
2250 occ1.setName("Activation 1");
2251
2252 ExecutionOccurrence occ2 = new ExecutionOccurrence();
2253 occ2.setStartOccurrence(syn1.getEventOccurrence());
2254 occ2.setEndOccurrence(synReturn1.getEventOccurrence());
2255 lifeLine2.addExecution(occ2);
2256 occ2.setName("Activation 2");
2257
2258 /*
* Create Async Message
2260 */
2261 // Get new occurrence on lifelines
2262 lifeLine1.getNewEventOccurrence();
2263 lifeLine2.getNewEventOccurrence();
2264
// Get Async message instances
2266 AsyncMessage asyn1 = new AsyncMessage();
2267 asyn1.setName("Async Message 1");
2268 asyn1.setStartLifeline(lifeLine1);
2269 asyn1.setEndLifeline(lifeLine2);
2270 testFrame.addMessage(asyn1);
2271
2272 /*
* Create corresponding Async Message Return
2274 */
2275
2276 // Get new occurrence on lifelines
2277 lifeLine1.getNewEventOccurrence();
2278 lifeLine2.getNewEventOccurrence();
2279
2280 AsyncMessageReturn asynReturn1 = new AsyncMessageReturn();
2281 asynReturn1.setName("Async Message Return 1");
2282 asynReturn1.setStartLifeline(lifeLine2);
2283 asynReturn1.setEndLifeline(lifeLine1);
2284 asynReturn1.setMessage(asyn1);
2285 testFrame.addMessage(asynReturn1);
2286
2287 /*
2288 * Create a note
2289 */
2290
2291 // Get new occurrence on lifelines
2292 lifeLine1.getNewEventOccurrence();
2293
2294 EllipsisMessage info = new EllipsisMessage();
2295 info.setName("Object deletion");
2296 info.setStartLifeline(lifeLine2);
2297 testFrame.addNode(info);
2298
2299 /*
2300 * Create a Stop
2301 */
2302 Stop stop = new Stop();
2303 stop.setLifeline(lifeLine2);
2304 stop.setEventOccurrence(lifeLine2.getNewEventOccurrence());
2305 lifeLine2.addNode(stop);
2306
2307 fSdView.setFrame(testFrame);
2308 }
2309 }
2310 </pre>
2311
2312 Now it's time to run the example application. To launch the Example Application select the ''Overview'' tab and click on '''Launch an Eclipse Application'''<br>
2313 [[Image:images/SampleDiagram1.png]] <br>
2314
2315 === Adding time information ===
2316
To add time information to a sequence diagram, the timestamp has to be set for each message. The sequence diagram framework uses the ''TmfTimestamp'' class of plug-in ''org.eclipse.linuxtools.tmf.core''. Use ''setTime()'' on each ''SyncMessage'', since its start and end time are the same. For each ''AsyncMessage'', set the start and end time separately by using the methods ''setStartTime()'' and ''setEndTime()''. For example: <br>
2318
2319 <pre>
2320 private void createFrame() {
2321 //...
2322 start.setTime(new TmfTimestamp(1000, -3));
2323 syn1.setTime(new TmfTimestamp(1005, -3));
2324 synReturn1.setTime(new TmfTimestamp(1050, -3));
2325 asyn1.setStartTime(new TmfTimestamp(1060, -3));
2326 asyn1.setEndTime(new TmfTimestamp(1070, -3));
2327 asynReturn1.setStartTime(new TmfTimestamp(1060, -3));
2328 asynReturn1.setEndTime(new TmfTimestamp(1070, -3));
2329 //...
2330 }
2331 </pre>
2332
When running the example application, a time compression bar appears on the left which indicates the time elapsed between consecutive events. The time compression scale shows where the time falls between the minimum and maximum delta times. The intensity of the color is used to indicate the length of time, namely, the deeper the intensity, the higher the delta time. The minimum and maximum delta times are configurable through the coolbar menu ''Configure Min Max''. The time compression bar and scale may provide an indication about which events consume the most time. By hovering over the time compression bar a tooltip appears containing more information. <br>
2334
2335 [[Image:images/SampleDiagramTimeComp.png]] <br>
2336
Hovering over a message shows the time information in a tooltip. For each ''SyncMessage'' it shows its time of occurrence, and for each ''AsyncMessage'' it shows the start and end time.
2338
2339 [[Image:images/SampleDiagramSyncMessage.png]] <br>
2340 [[Image:images/SampleDiagramAsyncMessage.png]] <br>
2341
To see the time elapsed between two messages, select one message and hover over a second message. A tooltip will show the time delta between the two messages. Note that if the second message occurs before the first one, a negative delta is displayed. Note that for ''AsyncMessage'' the end time is used for the delta calculation.<br>
2343 [[Image:images/SampleDiagramMessageDelta.png]] <br>
2344
2345 === Default Coolbar and Menu Items ===
2346
2347 The Sequence Diagram View comes with default coolbar and menu items. By default, each sequence diagram shows the following actions:
2348 * Zoom in
2349 * Zoom out
2350 * Reset Zoom Factor
2351 * Selection
2352 * Configure Min Max (drop-down menu only)
2353 * Navigation -> Show the node end (drop-down menu only)
2354 * Navigation -> Show the node start (drop-down menu only)
2355
2356 [[Image:images/DefaultCoolbarMenu.png]]<br>
2357
2358 === Implementing Optional Callbacks ===
2359
2360 The following chapters describe how to use all supported provider interfaces.
2361
2362 ==== Using the Paging Provider Interface ====
2363
For scalability reasons, the paging provider interfaces exist to limit the number of messages displayed in the Sequence Diagram View at a time. For that, two interfaces exist, the basic paging provider and the advanced paging provider. When using the basic paging interface, actions for traversing the sequence diagram of a trace page by page will be provided.
2365 <br>
To use the basic paging provider, first the interface methods of the ''ISDPagingProvider'' have to be implemented by a class (i.e. ''hasNextPage()'', ''hasPrevPage()'', ''nextPage()'', ''prevPage()'', ''firstPage()'' and ''lastPage()''). Typically, this is implemented in the loader class. Secondly, the provider has to be set in the Sequence Diagram View. This is done in the ''setViewer()'' method of the loader class. Lastly, the paging provider has to be removed from the view when the ''dispose()'' method of the loader class is called.
2367
2368 <pre>
2369 public class SampleLoader implements IUml2SDLoader, ISDPagingProvider {
2370 //...
private int page = 0;
2372
2373 @Override
2374 public void dispose() {
2375 if (fSdView != null) {
2376 fSdView.resetProviders();
2377 }
2378 }
2379
2380 @Override
2381 public void setViewer(SDView arg0) {
2382 fSdView = arg0;
2383 fSdView.setSDPagingProvider(this);
2384 createFrame();
2385 }
2386
2387 private void createSecondFrame() {
2388 Frame testFrame = new Frame();
2389 testFrame.setName("SecondFrame");
2390 Lifeline lifeline = new Lifeline();
2391 lifeline.setName("LifeLine 0");
2392 testFrame.addLifeLine(lifeline);
2393 lifeline = new Lifeline();
2394 lifeline.setName("LifeLine 1");
2395 testFrame.addLifeLine(lifeline);
2396 for (int i = 1; i < 5; i++) {
2397 SyncMessage message = new SyncMessage();
2398 message.autoSetStartLifeline(testFrame.getLifeline(0));
2399 message.autoSetEndLifeline(testFrame.getLifeline(0));
2400 message.setName((new StringBuilder("Message ")).append(i).toString());
2401 testFrame.addMessage(message);
2402
2403 SyncMessageReturn messageReturn = new SyncMessageReturn();
2404 messageReturn.autoSetStartLifeline(testFrame.getLifeline(0));
2405 messageReturn.autoSetEndLifeline(testFrame.getLifeline(0));
2406
2407 testFrame.addMessage(messageReturn);
2408 messageReturn.setName((new StringBuilder("Message return ")).append(i).toString());
2409 ExecutionOccurrence occ = new ExecutionOccurrence();
2410 occ.setStartOccurrence(testFrame.getSyncMessage(i - 1).getEventOccurrence());
2411 occ.setEndOccurrence(testFrame.getSyncMessageReturn(i - 1).getEventOccurrence());
2412 testFrame.getLifeline(0).addExecution(occ);
2413 }
2414 fSdView.setFrame(testFrame);
2415 }
2416
2417 @Override
2418 public boolean hasNextPage() {
2419 return page == 0;
2420 }
2421
2422 @Override
2423 public boolean hasPrevPage() {
2424 return page == 1;
2425 }
2426
2427 @Override
2428 public void nextPage() {
2429 page = 1;
2430 createSecondFrame();
2431 }
2432
2433 @Override
2434 public void prevPage() {
2435 page = 0;
2436 createFrame();
2437 }
2438
2439 @Override
2440 public void firstPage() {
2441 page = 0;
2442 createFrame();
2443 }
2444
2445 @Override
2446 public void lastPage() {
2447 page = 1;
2448 createSecondFrame();
2449 }
2450 //...
2451 }
2452
2453 </pre>
2454
2455 When running the example application, new actions will be shown in the coolbar and the coolbar menu. <br>
2456
2457 [[Image:images/PageProviderAdded.png]]
2458
2459 <br><br>
2460 To use the advanced paging provider, the interface ''ISDAdvancePagingProvider'' has to be implemented. It extends the basic paging provider. The methods ''currentPage()'', ''pagesCount()'' and ''pageNumberChanged()'' have to be added.
2461 <br>
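A minimal sketch of those three methods, reusing the two-page example above; the int return types and the int parameter of ''pageNumberChanged()'' are assumptions of this sketch (check the interface's Javadoc):

<pre>
@Override
public int currentPage() {
    return page;
}

@Override
public int pagesCount() {
    return 2;
}

@Override
public void pageNumberChanged(int pageNumber) {
    page = pageNumber;
    if (page == 0) {
        createFrame();
    } else {
        createSecondFrame();
    }
}
</pre>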
2462
2463 ==== Using the Find Provider Interface ====
2464
For finding nodes in a sequence diagram two interfaces exist: one for basic finding and one for extended finding. The basic find comes with a dialog box for entering find criteria as regular expressions. These find criteria can be used to execute the find. Find criteria are persisted in the Eclipse workspace.
2466 <br>
For the extended find provider interface an ''org.eclipse.jface.action.Action'' class has to be provided. The actual find handling has to be implemented and triggered by the action.
2468 <br>
Only one at a time can be active. If the extended find provider is defined, it obsoletes the basic find provider.
2470 <br>
To use the basic find provider, first the interface methods of the ''ISDFindProvider'' have to be implemented by a class. Typically, this is implemented in the loader class. Add the ISDFindProvider to the list of implemented interfaces, implement the methods ''find()'' and ''cancel()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that the ''ISDFindProvider'' extends the interface ''ISDGraphNodeSupporter'', whose methods (''isNodeSupported()'' and ''getNodeName()'') have to be implemented, too. The following shows an example implementation. Please note that only searching for lifelines and SyncMessages is supported. The find itself will always find only the first occurrence of the pattern to match.
2472
2473 <pre>
2474 public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider {
2475
2476 //...
2477 @Override
2478 public void dispose() {
2479 if (fSdView != null) {
2480 fSdView.resetProviders();
2481 }
2482 }
2483
2484 @Override
2485 public void setViewer(SDView arg0) {
2486 fSdView = arg0;
2487 fSdView.setSDPagingProvider(this);
2488 fSdView.setSDFindProvider(this);
2489 createFrame();
2490 }
2491
2492 @Override
2493 public boolean isNodeSupported(int nodeType) {
2494 switch (nodeType) {
2495 case ISDGraphNodeSupporter.LIFELINE:
2496 case ISDGraphNodeSupporter.SYNCMESSAGE:
2497 return true;
2498
2499 default:
2500 break;
2501 }
2502 return false;
2503 }
2504
2505 @Override
2506 public String getNodeName(int nodeType, String loaderClassName) {
2507 switch (nodeType) {
2508 case ISDGraphNodeSupporter.LIFELINE:
2509 return "Lifeline";
2510 case ISDGraphNodeSupporter.SYNCMESSAGE:
2511 return "Sync Message";
2512 }
2513 return "";
2514 }
2515
2516 @Override
2517 public boolean find(Criteria criteria) {
2518 Frame frame = fSdView.getFrame();
2519 if (criteria.isLifeLineSelected()) {
2520 for (int i = 0; i < frame.lifeLinesCount(); i++) {
2521 if (criteria.matches(frame.getLifeline(i).getName())) {
2522 fSdView.getSDWidget().moveTo(frame.getLifeline(i));
2523 return true;
2524 }
2525 }
2526 }
2527 if (criteria.isSyncMessageSelected()) {
2528 for (int i = 0; i < frame.syncMessageCount(); i++) {
2529 if (criteria.matches(frame.getSyncMessage(i).getName())) {
2530 fSdView.getSDWidget().moveTo(frame.getSyncMessage(i));
2531 return true;
2532 }
2533 }
2534 }
2535 return false;
2536 }
2537
2538 @Override
2539 public void cancel() {
2540 // reset find parameters
2541 }
2542 //...
2543 }
2544 </pre>
2545
2546 When running the example application, the find action will be shown in the coolbar and the coolbar menu. <br>
2547 [[Image:images/FindProviderAdded.png]]
2548
To find a sequence diagram node press the find button of the coolbar (see above). A new dialog box will open. Enter a regular expression in the ''Matching String'' text box, select the node types (e.g. Sync Message) and press '''Find'''. If a match is found, the corresponding node will be selected. If not, the dialog box will indicate that nothing was found. <br>
2550 [[Image:images/FindDialog.png]]<br>
2551
Note that the find dialog will be opened by typing the key shortcut CTRL+F.
2553
2554 ==== Using the Filter Provider Interface ====
2555
For filtering of sequence diagram elements two interfaces exist: one for basic filtering and one for extended filtering. The basic filtering comes with two dialogs: one for entering filter criteria as regular expressions and one for selecting the filters to be used. Multiple filters can be active at a time. Filter criteria are persisted in the Eclipse workspace.
2557 <br>
To use the basic filter provider, first the interface method of the ''ISDFilterProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDFilterProvider'' to the list of implemented interfaces, implement the method ''filter()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that the ''ISDFilterProvider'' also extends the interface ''ISDGraphNodeSupporter'', whose methods (''isNodeSupported()'' and ''getNodeName()'') have to be implemented, too. <br>
2559 Note that no example implementation of ''filter()'' is provided.
2560 <br>
2561
2562 <pre>
2563 public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider {
2564
2565 //...
2566 @Override
2567 public void dispose() {
2568 if (fSdView != null) {
2569 fSdView.resetProviders();
2570 }
2571 }
2572
2573 @Override
2574 public void setViewer(SDView arg0) {
2575 fSdView = arg0;
2576 fSdView.setSDPagingProvider(this);
2577 fSdView.setSDFindProvider(this);
2578 fSdView.setSDFilterProvider(this);
2579 createFrame();
2580 }
2581
2582 @Override
2583 public boolean filter(List<?> list) {
2584 return false;
2585 }
2586 //...
2587 }
2588 </pre>
2589
2590 When running the example application, the filter action will be shown in the coolbar menu. <br>
2591 [[Image:images/HidePatternsMenuItem.png]]
2592
2593 To filter select the '''Hide Patterns...''' of the coolbar menu. A new dialog box will open. <br>
2594 [[Image:images/DialogHidePatterns.png]]
2595
To add a new filter press '''Add...'''. A new dialog box will open. Enter a regular expression in the ''Matching String'' text box, select the node types (e.g. Sync Message) and press '''Create'''. <br>
2597 [[Image:images/DialogHidePatterns.png]] <br>
2598
Back in the Hide Patterns dialog, select one or more filters and select '''OK'''.
2600
2601 To use the extended filter provider, the interface ''ISDExtendedFilterProvider'' has to be implemented. It will provide a ''org.eclipse.jface.action.Action'' class containing the actual filter handling and filter algorithm.
2602
2603 ==== Using the Extended Action Bar Provider Interface ====
2604
2605 The extended action bar provider can be used to add customized actions to the Sequence Diagram View.
2606 To use the extended action bar provider, first the interface method of the interface ''ISDExtendedActionBarProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDExtendedActionBarProvider'' to the list of implemented interfaces, implement the method ''supplementCoolbarContent()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. <br>
2607
2608 <pre>
2609 public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider, ISDExtendedActionBarProvider {
2610 //...
2611
2612 @Override
2613 public void dispose() {
2614 if (fSdView != null) {
2615 fSdView.resetProviders();
2616 }
2617 }
2618
2619 @Override
2620 public void setViewer(SDView arg0) {
2621 fSdView = arg0;
2622 fSdView.setSDPagingProvider(this);
2623 fSdView.setSDFindProvider(this);
2624 fSdView.setSDFilterProvider(this);
2625 fSdView.setSDExtendedActionBarProvider(this);
2626 createFrame();
2627 }
2628
2629 @Override
2630 public void supplementCoolbarContent(IActionBars iactionbars) {
2631 Action action = new Action("Refresh") {
2632 @Override
2633 public void run() {
2634 System.out.println("Refreshing...");
2635 }
2636 };
2637 iactionbars.getMenuManager().add(action);
2638 iactionbars.getToolBarManager().add(action);
2639 }
2640 //...
2641 }
2642 </pre>
2643
When running the example application, all new actions will be added to the coolbar and coolbar menu according to the implementation of ''supplementCoolbarContent()''.<br>
2645 For the example above the coolbar and coolbar menu will look as follows.
2646
2647 [[Image:images/SupplCoolbar.png]]
2648
2649 ==== Using the Properties Provider Interface====
2650
2651 This interface can be used to provide property information. A property provider which returns an ''IPropertyPageSheet'' (see ''org.eclipse.ui.views'') has to be implemented and set in the Sequence Diagram View. <br>
2652
To use the property provider, first the interface method of the ''ISDPropertiesProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDPropertiesProvider'' to the list of implemented interfaces, implement the method ''getPropertySheetEntry()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. A minimal sketch is shown below.
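
The sketch assumes that ''getPropertySheetEntry()'' returns an ''org.eclipse.ui.views.properties.IPropertySheetPage''; a standard ''PropertySheetPage'' is returned here, but a custom page could be supplied instead. As for the other providers, the provider would be set in ''setViewer()'' and removed in ''dispose()''.

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPropertiesProvider {
    //...
    @Override
    public IPropertySheetPage getPropertySheetEntry() {
        // Return a standard property sheet page; it will display the properties
        // of the current selection in the Sequence Diagram View.
        return new PropertySheetPage();
    }
    //...
}
</pre>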
2654
Please refer to the following Eclipse articles for more information about properties and tabbed properties.
*[http://www.eclipse.org/articles/Article-Properties-View/properties-view.html Take control of your properties]
*[http://www.eclipse.org/articles/Article-Tabbed-Properties/tabbed_properties_view.html The Eclipse Tabbed Properties View]
2658
2659 ==== Using the Collapse Provider Interface ====
2660
This interface can be used to define a provider whose responsibility is to collapse two selected lifelines. This can be used to hide a pair of lifelines.
2662
2663 To use the collapse provider, first the interface method of the ''ISDCollapseProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ISDCollapseProvider to the list of implemented interfaces, implement the method ''collapseTwoLifelines()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that no example is provided here.
2664
2665 ==== Using the Selection Provider Service ====
2666
The Sequence Diagram View comes with a built-in selection provider service to which listeners can be added. To use the selection provider service, the interface ''ISelectionListener'' of plug-in ''org.eclipse.ui'' has to be implemented. Typically this is done in the loader class. First, add the ''ISelectionListener'' interface to the list of implemented interfaces, implement the method ''selectionChanged()'' and set the listener in method ''setViewer()'' as well as remove the listener in the ''dispose()'' method of the loader class.
2668
2669 <pre>
2670 public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider, ISDExtendedActionBarProvider, ISelectionListener {
2671
2672 //...
2673 @Override
2674 public void dispose() {
2675 if (fSdView != null) {
2676 PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().removePostSelectionListener(this);
2677 fSdView.resetProviders();
2678 }
2679 }
2680
2681 @Override
2682 public String getTitleString() {
2683 return "Sample Diagram";
2684 }
2685
2686 @Override
2687 public void setViewer(SDView arg0) {
2688 fSdView = arg0;
2689 PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().addPostSelectionListener(this);
2690 fSdView.setSDPagingProvider(this);
2691 fSdView.setSDFindProvider(this);
2692 fSdView.setSDFilterProvider(this);
2693 fSdView.setSDExtendedActionBarProvider(this);
2694
2695 createFrame();
2696 }
2697
2698 @Override
2699 public void selectionChanged(IWorkbenchPart part, ISelection selection) {
2700 ISelection sel = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().getSelection();
2701 if (sel != null && (sel instanceof StructuredSelection)) {
2702 StructuredSelection stSel = (StructuredSelection) sel;
2703 if (stSel.getFirstElement() instanceof BaseMessage) {
2704 BaseMessage syncMsg = ((BaseMessage) stSel.getFirstElement());
2705 System.out.println("Message '" + syncMsg.getName() + "' selected.");
2706 }
2707 }
2708 }
2709
2710 //...
2711 }
2712 </pre>
2713
2714 === Printing a Sequence Diagram ===
2715
To print the whole sequence diagram or only parts of it, select the Sequence Diagram View and select '''File -> Print...''' or type the key combination ''CTRL+P''. A new print dialog will open. <br>
2717
2718 [[Image:images/PrintDialog.png]] <br>
2719
Fill in all the relevant information, select '''Printer...''' to choose the printer and then press '''OK'''.
2721
2722 === Using one Sequence Diagram View with Multiple Loaders ===
2723
A Sequence Diagram View definition can be used with multiple sequence diagram loaders. However, the active loader to be used when opening the view has to be set. To do this, define an Eclipse action or command that assigns the current loader to the view. Here is a code snippet for that:
2725
2726 <pre>
2727 public class OpenSDView extends AbstractHandler {
2728 @Override
2729 public Object execute(ExecutionEvent event) throws ExecutionException {
2730 try {
2731 IWorkbenchPage persp = TmfUiPlugin.getDefault().getWorkbench().getActiveWorkbenchWindow().getActivePage();
2732 SDView view = (SDView) persp.showView("org.eclipse.linuxtools.ust.examples.ui.componentinteraction");
2733 LoadersManager.getLoadersManager().createLoader("org.eclipse.linuxtools.tmf.ui.views.uml2sd.impl.TmfUml2SDSyncLoader", view);
2734 } catch (PartInitException e) {
2735 throw new ExecutionException("PartInitException caught: ", e);
2736 }
2737 return null;
2738 }
2739 }
2740 </pre>
2741
2742 === Downloading the Tutorial ===
2743
2744 Use the following link to download the source code of the tutorial [http://wiki.eclipse.org/images/e/e6/SamplePlugin.zip Plug-in of Tutorial].
2745
2746 == Integration of Tracing and Monitoring Framework with Sequence Diagram Framework ==
2747
In the previous sections the Sequence Diagram Framework has been described and a tutorial was provided. In the following sections the integration of the Sequence Diagram Framework with other features of TMF will be described. Together they form a powerful framework to analyze and visualize the content of traces. The integration is explained using the reference implementation of a UML2 sequence diagram loader which is part of the TMF UI delivery. The reference implementation can be used as is, can be sub-classed, or simply serve as an example for other sequence diagram loaders to be implemented.
2749
2750 === Reference Implementation ===
2751
2752 A Sequence Diagram View Extension is defined in the plug-in TMF UI as well as a uml2SDLoader Extension with the reference loader.
2753
2754 [[Image:images/ReferenceExtensions.png]]
2755
2756 === Used Sequence Diagram Features ===
2757
2758 Besides the default features of the Sequence Diagram Framework, the reference implementation uses the following additional features:
2759 *Advanced paging
2760 *Basic finding
2761 *Basic filtering
2762 *Selection Service
2763
2764 ==== Advanced paging ====
2765
The reference loader implements the ''ISDAdvancedPagingProvider'' interface. Please refer to section [[#Using the Paging Provider Interface | Using the Paging Provider Interface]] for more details about the advanced paging feature.
2767
2768 ==== Basic finding ====
2769
The reference loader implements the ''ISDFindProvider'' interface. The user can search for ''Lifelines'' and ''Interactions''. The find is done across pages. If the expression to match is not on the current page, a new thread is started to search on other pages. If the expression is found, the corresponding page is shown and the found item is displayed. If it is not found, a message is displayed in the ''Progress View'' of Eclipse. Please refer to section [[#Using the Find Provider Interface | Using the Find Provider Interface]] for more details about the basic find feature.
2771
2772 ==== Basic filtering ====
2773
The reference loader implements the ''ISDFilterProvider'' interface. The user can filter on ''Lifelines'' and ''Interactions''. Please refer to section [[#Using the Filter Provider Interface | Using the Filter Provider Interface]] for more details about the basic filter feature.
2775
2776 ==== Selection Service ====
2777
The reference loader implements the ''ISelectionListener'' interface. When an interaction is selected a ''TmfTimeSynchSignal'' is broadcast (see [[#TMF Signal Framework | TMF Signal Framework]]). Please also refer to section [[#Using the Selection Provider Service | Using the Selection Provider Service]] for more details about the selection service.
2779
2780 === Used TMF Features ===
2781
2782 The reference implementation uses the following features of TMF:
2783 *TMF Experiment and Trace for accessing traces
2784 *Event Request Framework to request TMF events from the experiment and respective traces
2785 *Signal Framework for broadcasting and receiving TMF signals for synchronization purposes
2786
2787 ==== TMF Experiment and Trace for accessing traces ====
2788
2789 The reference loader uses TMF Experiments to access traces and to request data from the traces.
2790
2791 ==== TMF Event Request Framework ====
2792
The reference loader uses the TMF Event Request Framework to request events from the experiment and its traces.
2794
When opening a trace (which is triggered by the signal ''TmfExperimentSelected'') or when opening the Sequence Diagram View after a trace had been opened previously, a TMF background request is initiated to index the trace and to fill in the first page of the sequence diagram. The purpose of the indexing is to store time ranges for pages of 10000 messages per page. This allows moving quickly to a certain page in a trace without having to re-parse it from the beginning. This request is called the indexing request.
2796
When switching pages, a TMF foreground event request is initiated to retrieve the corresponding events from the experiment. It uses the time range stored in the index for the respective page.
2798
2799 A third type of event request is issued for finding specific data across pages.
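
The following plain-Java sketch (not the actual ''TmfUml2SDSyncLoader'' code) illustrates the idea behind the page index: the indexing request records the time range covered by each page of 10000 messages, and switching to a page then only requires requesting the events within that range.

<pre>
import java.util.ArrayList;
import java.util.List;

public class PageIndex {
    private static final int MESSAGES_PER_PAGE = 10000;

    /* One {startTime, endTime} pair per page */
    private final List<long[]> fPageRanges = new ArrayList<long[]>();
    private long fCurrentStart;
    private int fCount;

    /* Called for each sequence diagram event found by the indexing request */
    public void addEvent(long timestamp) {
        if (fCount == 0) {
            fCurrentStart = timestamp;
        }
        fCount++;
        if (fCount == MESSAGES_PER_PAGE) {
            fPageRanges.add(new long[] { fCurrentStart, timestamp });
            fCount = 0;
        }
    }

    /* Time range to use in the foreground request when switching to a page
     * (the last, partially filled page is omitted for brevity) */
    public long[] getRangeForPage(int page) {
        return fPageRanges.get(page);
    }
}
</pre>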
2800
2801 ==== TMF Signal Framework ====
2802
2803 The reference loader extends the class ''TmfComponent''. By doing that the loader is registered as a TMF signal handler for sending and receiving TMF signals. The loader implements signal handlers for the following TMF signals:
2804 *''TmfTraceSelectedSignal''
2805 This signal indicates that a trace or experiment was selected. When receiving this signal the indexing request is initiated and the first page is displayed after receiving the relevant information.
2806 *''TmfTraceClosedSignal''
2807 This signal indicates that a trace or experiment was closed. When receiving this signal the loader resets its data and a blank page is loaded in the Sequence Diagram View.
2808 *''TmfTimeSynchSignal''
2809 This signal is used to indicate that a new time or time range has been selected. It contains a begin and end time. If a single time is selected then the begin and end time are the same. When receiving this signal the corresponding message matching the begin time is selected in the Sequence Diagram View. If necessary, the page is changed.
2810 *''TmfRangeSynchSignal''
2811 This signal indicates that a new time range is in focus. When receiving this signal the loader loads the page which corresponds to the start time of the time range signal. The message with the start time will be in focus.
2812
Besides acting on received signals, the reference loader also sends signals. A ''TmfTimeSynchSignal'' is broadcast with the timestamp of the message which was selected in the Sequence Diagram View. A ''TmfRangeSynchSignal'' is sent when a page is changed in the Sequence Diagram View. The start timestamp of the time range sent is the timestamp of the first message. The end timestamp sent is the timestamp of the first message plus the current time range window. The current time range window is the time window that was indicated in the last received ''TmfRangeSynchSignal''.
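
The following sketch shows the general shape of this signal handling in a loader extending ''TmfComponent''. The exact constructors of ''TmfTimeSynchSignal'' should be checked against the ''org.eclipse.linuxtools.tmf.core.signal'' package; the timestamp argument used here is an assumption.

<pre>
/* Broadcasting the time of the message selected in the Sequence Diagram View */
private void broadcastSelectedTime(ITmfTimestamp selectedTime) {
    // broadcast() is inherited from TmfComponent
    broadcast(new TmfTimeSynchSignal(this, selectedTime));
}

/* Receiving a time selection coming from another view */
@TmfSignalHandler
public void synchToTime(TmfTimeSynchSignal signal) {
    // Select the message matching the signal's begin time and, if necessary,
    // load the corresponding page first.
}
</pre>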
2814
2815 === Supported Traces ===
2816
The reference implementation is able to analyze traces from a single component that traces the interaction with other components. For example, a server node could have trace information about its interaction with client nodes. The server node could be traced and then analyzed using TMF, and the Sequence Diagram Framework of TMF could be used to visualize the interactions with the client nodes.<br>
2818
Note that combined traces of multiple components that contain trace information about the same interactions are not supported in the reference implementation!
2820
2821 === Trace Format ===
2822
The reference implementation in class ''TmfUml2SDSyncLoader'' in package ''org.eclipse.linuxtools.tmf.ui.views.uml2sd.impl'' analyzes events of type ''ITmfEvent'' and creates events of type ''ITmfSyncSequenceDiagramEvent'' if the ''ITmfEvent'' contains all relevant information. The parsing algorithm looks as follows:
2824
2825 <pre>
2826 /**
2827 * @param tmfEvent Event to parse for sequence diagram event details
2828 * @return sequence diagram event if details are available else null
2829 */
2830 protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent){
2831 //type = .*RECEIVE.* or .*SEND.*
2832 //content = sender:<sender name>:receiver:<receiver name>,signal:<signal name>
2833 String eventType = tmfEvent.getType().toString();
2834 if (eventType.contains(Messages.TmfUml2SDSyncLoader_EventTypeSend) || eventType.contains(Messages.TmfUml2SDSyncLoader_EventTypeReceive)) {
2835 Object sender = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldSender);
2836 Object receiver = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldReceiver);
2837 Object name = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldSignal);
2838 if ((sender instanceof ITmfEventField) && (receiver instanceof ITmfEventField) && (name instanceof ITmfEventField)) {
2839 ITmfSyncSequenceDiagramEvent sdEvent = new TmfSyncSequenceDiagramEvent(tmfEvent,
2840 ((ITmfEventField) sender).getValue().toString(),
2841 ((ITmfEventField) receiver).getValue().toString(),
2842 ((ITmfEventField) name).getValue().toString());
2843
2844 return sdEvent;
2845 }
2846 }
2847 return null;
2848 }
2849 </pre>
2850
The analysis looks for event type strings containing ''SEND'' and ''RECEIVE''. If the event type matches these keywords, the analyzer will look for the fields ''sender'', ''receiver'' and ''signal'' in the event fields of type ''ITmfEventField''. If all the data is found, a sequence diagram event is created using this information. Note that Sync Messages are assumed, which means the start and end time are the same.
2852
2853 === How to use the Reference Implementation ===
2854
2855 An example CTF (Common Trace Format) trace is provided that contains trace events with sequence diagram information. To download the reference trace, use the following link: [https://wiki.eclipse.org/images/3/35/ReferenceTrace.zip Reference Trace].
2856
Run an Eclipse application with TMF 3.0 or later installed. To open the Reference Sequence Diagram View, select '''Window -> Show View -> Other... -> TMF -> Sequence Diagram'''. <br>
2858 [[Image:images/ShowTmfSDView.png]]<br>
2859
2860 A blank Sequence Diagram View will open.
2861
2862 Then import the reference trace to the '''Project Explorer''' using the '''Import Trace Package...''' menu option.<br>
2863 [[Image:images/ImportTracePackage.png]]
2864
2865 Next, open the trace by double-clicking on the trace element in the '''Project Explorer'''. The trace will be opened and the Sequence Diagram view will be filled.
2866 [[Image:images/ReferenceSeqDiagram.png]]<br>
2867
2868 Now the reference implementation can be explored. To demonstrate the view features try the following things:
*Select a message in the Sequence Diagram. As a result the corresponding event will be selected in the Events View.
*Select an event in the Events View. As a result the corresponding message in the Sequence Diagram View will be selected. If necessary, the page will be changed.
*In the Events View, press key ''End''. As a result, the Sequence Diagram view will jump to the last page.
*In the Events View, press key ''Home''. As a result, the Sequence Diagram view will jump to the first page.
*In the Sequence Diagram View select the find button. Enter the expression '''REGISTER.*''', select '''Search for Interaction''' and press '''Find'''. As a result the corresponding message will be selected in the Sequence Diagram and the corresponding event in the Events View will be selected. Select '''Find''' again and the next occurrence will be selected. Since the second occurrence is on a different page than the first, the corresponding page will be loaded.
*In the Sequence Diagram View, select menu item '''Hide Patterns...'''. Add the filter '''BALL.*''' for '''Interaction''' only and select '''OK'''. As a result all messages with name ''BALL_REQUEST'' and ''BALL_REPLY'' will be hidden. To remove the filter, select menu item '''Hide Patterns...''', deselect the corresponding filter and press '''OK'''. All the messages will be shown again.<br>
2875
2876 === Extending the Reference Loader ===
2877
In some cases it might be necessary to change the implementation of the analysis of each ''ITmfEvent'' for the generation of ''Sequence Diagram Events''. For that, extend the class ''TmfUml2SDSyncLoader'' and override the method ''protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent)'' with your own implementation.
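
A minimal sketch of such a subclass is shown below. The field names ''"src"'', ''"dst"'' and ''"msg"'' are hypothetical and only illustrate a custom event layout; the rest follows the pattern of the reference implementation shown above.

<pre>
public class MyCustomSDLoader extends TmfUml2SDSyncLoader {

    @Override
    protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent) {
        // Hypothetical field names of a custom trace layout
        Object sender = tmfEvent.getContent().getField("src");
        Object receiver = tmfEvent.getContent().getField("dst");
        Object name = tmfEvent.getContent().getField("msg");
        if ((sender instanceof ITmfEventField) && (receiver instanceof ITmfEventField) && (name instanceof ITmfEventField)) {
            return new TmfSyncSequenceDiagramEvent(tmfEvent,
                    ((ITmfEventField) sender).getValue().toString(),
                    ((ITmfEventField) receiver).getValue().toString(),
                    ((ITmfEventField) name).getValue().toString());
        }
        return null;
    }
}
</pre>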
2879
2880 = CTF Parser =
2881
2882 == CTF Format ==
CTF is a format used to store traces. It is self-describing, binary and made to be easy to write to.
2884 Before going further, the full specification of the CTF file format can be found at http://www.efficios.com/ .
2885
2886 For the purpose of the reader some basic description will be given. A CTF trace typically is made of several files all in the same folder.
2887
2888 These files can be split into two types :
2889 * Metadata
2890 * Event streams
2891
2892 === Metadata ===
The metadata is either raw text or packetized text. It is TSDL-encoded and contains a description of the type of data in the event streams. It can grow over time if new events are added to a trace but it will never overwrite what is already there.
2894
2895 === Event Streams ===
The event streams are stored in one file per stream per CPU. These streams are binary and packet based. The streams store events and event information (i.e. lost events). The event data is stored in headers and field payloads.
2897
2898 So if you have two streams (channels) "channel1" and "channel2" and 4 cores, you will have the following files in your trace directory: "channel1_0" , "channel1_1" , "channel1_2" , "channel1_3" , "channel2_0" , "channel2_1" , "channel2_2" & "channel2_3"
2899
2900 == Reading a trace ==
In order to read a CTF trace, two steps must be performed:
* The metadata must be read to know how to read the events.
* The events must be read.
2904
The metadata is written in a subset of the C language called TSDL. To read it, it is first depacketized (if it is not in plain text), then the raw text is parsed by an ANTLR grammar. The parsing is done in two phases. There is a lexer (CTFLexer.g) which separates the metadata text into tokens. The tokens are then pattern matched using the parser (CTFParser.g) to form an AST. This AST is walked through using "IOStructGen.java" to populate the streams and traces in the trace parent object.
2906
2907 When the metadata is loaded and read, the trace object will be populated with 3 items:
2908 * the event definitions available per stream: a definition is a description of the datatype.
2909 * the event declarations available per stream: this will save declaration creation on a per event basis. They will all be created in advance, just not populated.
2910 * the beginning of a packet index.
2911
Now all the trace readers for the event streams have everything they need to read a trace. They will each point to one file, and read the file from packet to packet. Every time the trace reader changes packet, the index is updated with the new packet's information. The readers are in a priority queue and sorted by timestamp. This ensures that the events are read in a sequential order. They are also sorted by file name so that in the eventuality that two events occur at the same time, they stay in the same order.
2913
2914 == Seeking in a trace ==
The reason for maintaining an index is to speed up seeks. In the case that a user wishes to seek to a certain timestamp, they just have to find the index entry that contains the timestamp, and go there to iterate in that packet until the proper event is found. This reduces the search time by a factor of about 8000 for a 256 kB packet size (the kernel default).
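
As an illustration only (plain Java, not the actual CTF parser code), finding the packet that contains a requested timestamp amounts to a binary search over the index entries sorted by their start timestamp:

<pre>
/**
 * @param packetStartTimes start timestamp of each indexed packet, in ascending order
 * @param timestamp        the timestamp to seek to
 * @return the index of the packet in which to iterate
 */
static int findPacket(long[] packetStartTimes, long timestamp) {
    int low = 0;
    int high = packetStartTimes.length - 1;
    while (low < high) {
        int mid = (low + high + 1) >>> 1;
        if (packetStartTimes[mid] <= timestamp) {
            low = mid;   // the packet at 'mid' starts before (or at) the timestamp
        } else {
            high = mid - 1;
        }
    }
    return low;
}
</pre>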
2916
2917 == Interfacing to TMF ==
2918 The trace can be read easily now but the data is still awkward to extract.
2919
2920 === CtfLocation ===
A location in a given trace: it is currently the timestamp of a trace event and the index of the event. The index indicates, for a given timestamp, whether it is the first, second or n-th element with that timestamp.
2922
2923 === CtfTmfTrace ===
The CtfTmfTrace is a wrapper for the standard CTF trace that allows it to perform the following actions (a short usage sketch follows the list):
2925 * '''initTrace()''' create a trace
2926 * '''validateTrace()''' is the trace a CTF trace?
2927 * '''getLocationRatio()''' how far in the trace is my location?
2928 * '''seekEvent()''' sets the cursor to a certain point in a trace.
2929 * '''readNextEvent()''' reads the next event and then advances the cursor
2930 * '''getTraceProperties()''' gets the 'env' structures of the metadata
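
The following sketch uses only the methods listed above; the exact signatures (in particular the arguments of '''initTrace()''' and '''seekEvent()''') should be checked against the actual ''CtfTmfTrace'' API, and the trace path is hypothetical.

<pre>
CtfTmfTrace trace = new CtfTmfTrace();
trace.initTrace(null, "/path/to/ctf/trace", CtfTmfEvent.class); // hypothetical path
ITmfContext ctx = trace.seekEvent(0L);          // position the cursor on the first event
CtfTmfEvent event = trace.readNextEvent(ctx);   // read the event and advance the cursor
while (event != null) {
    // process the event
    event = trace.readNextEvent(ctx);
}
trace.dispose();
</pre>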
2931
2932 === CtfIterator ===
The CtfIterator is a wrapper to the CTF file reader. It behaves like an iterator on a trace. However, it contains a file pointer and thus cannot be duplicated too often or the system will run out of file handles. To alleviate the situation, a pool of iterators is created at the very beginning and stored in the CtfTmfTrace. They can be retrieved by calling the getIterator() method.
2934
2935 === CtfIteratorManager ===
2936 Since each CtfIterator will have a file reader, the OS will run out of handles if too many iterators are spawned. The solution is to use the iterator manager. This will allow the user to get an iterator. If there is a context at the requested position, the manager will return that one, if not, a context will be selected at random and set to the correct location. Using random replacement minimizes contention as it will settle quickly at a new balance point.
2937
2938 === CtfTmfContext ===
The CtfTmfContext implements the ITmfContext type. It is the CTF equivalent of TmfContext. It has a CtfLocation and points to an iterator in the CtfTmfTrace iterator pool as well as the parent trace. It is made to be cloned easily and not affect system resources much. Contexts behave much like C file pointers (FILE*) but they can be copied until one runs out of RAM.
2940
2941 === CtfTmfTimestamp ===
The CtfTmfTimestamp takes a CTF time (normally a long int) and formats it as a TmfTimestamp, allowing it to be compared to other timestamps. The time is stored with the UTC offset already applied. It also features a simple toString() function that allows it to output the time in more human-readable ways: "yyyy/mm/dd/hh:mm:ss.nnnnnnnnn ns" for example. An additional feature is the getDelta() function that allows two timestamps to be subtracted, showing the time difference between A and B.
2943
2944 === CtfTmfEvent ===
The CtfTmfEvent is an ITmfEvent that is used to wrap event declarations and event definitions from the CTF side into easier to read and parse chunks of information. It is a final class with final fields made to be instantiated very often without incurring performance costs. Most of the information is already available. It should be noted that one type of event, called "lost event", can appear: these are synthetic events that do not exist in the trace. They will not appear in other trace readers such as babeltrace.
2946
2947 === Other ===
2948 There are other helper files that format given events for views, they are simpler and the architecture does not depend on them.
2949
2950 === Limitations ===
For the moment, live trace reading is not supported, as there are no sources of traces to test on.
2952
2953 = Event matching and trace synchronization =
2954
2955 Event matching consists in taking an event from a trace and linking it to another event in a possibly different trace. The example that comes to mind is matching network packets sent from one traced machine to another traced machine. These matches can be used to synchronize traces.
2956
2957 Trace synchronization consists in taking traces, taken on different machines, with a different time reference, and finding the formula to transform the timestamps of some of the traces, so that they all have the same time reference.
2958
2959 == Event matching interfaces ==
2960
2961 Here's a description of the major parts involved in event matching. These classes are all in the ''org.eclipse.linuxtools.tmf.core.event.matching'' package:
2962
2963 * '''ITmfEventMatching''': Controls the event matching process
2964 * '''ITmfMatchEventDefinition''': Describes how events are matched
2965 * '''IMatchProcessingUnit''': Processes the matched events
2966
2967 == Implementation details and how to extend it ==
2968
2969 === ITmfEventMatching interface and derived classes ===
2970
This interface and its default abstract implementation '''TmfEventMatching''' control the event matching itself. Their only public method is ''matchEvents''. The class needs to manage how to set up the traces, and any initialization or finalization procedures.
2972
2973 The abstract class generates an event request for each trace from which events are matched and waits for the request to complete before calling the one from another trace. The ''handleData'' method from the request calls the ''matchEvent'' method that needs to be implemented in children classes.
2974
Class '''TmfNetworkEventMatching''' is a concrete implementation of this interface. It applies to all use cases where an ''in'' event can be matched with an ''out'' event (''in'' and ''out'' can be the same event, with different data). It creates a '''TmfEventDependency''' between the source and destination events. The dependency is added to the processing unit.
2976
2977 To match events requiring other mechanisms (for instance, a series of events can be matched with another series of events), one would need to implement another class either extending '''TmfEventMatching''' or implementing '''ITmfEventMatching'''. It would most probably also require a new '''ITmfMatchEventDefinition''' implementation.
2978
2979 === ITmfMatchEventDefinition interface and its derived classes ===
2980
2981 These are the classes that describe how to actually match specific events together.
2982
2983 The '''canMatchTrace''' method will tell if a definition is compatible with a given trace.
2984
2985 The '''getUniqueField''' method will return a list of field values that uniquely identify this event and can be used to find a previous event to match with.
2986
2987 Typically, there would be a match definition abstract class/interface per event matching type.
2988
2989 The interface '''ITmfNetworkMatchDefinition''' adds the ''getDirection'' method to indicate whether this event is a ''in'' or ''out'' event to be matched with one from the opposite direction.
2990
2991 As examples, two concrete network match definitions have been implemented in the ''org.eclipse.linuxtools.lttng2.kernel.core.event.matching'' package for two compatible methods of matching TCP packets (See the LTTng User Guide on ''trace synchronization'' for information on those matching methods). Each one tells which events need to be present in the metadata of a CTF trace for this matching method to be applicable. It also returns the field values from each event that will uniquely match 2 events together.
2992
2993 === IMatchProcessingUnit interface and derived classes ===
2994
While matching events is an exercise in itself, it's what to do with the matches that really makes this functionality interesting. This is the job of the '''IMatchProcessingUnit''' interface.
2996
'''TmfEventMatches''' provides a default implementation that only stores the matches to count them. When a new match is obtained, the ''addMatch'' method is called with the match, and the processing unit can do whatever needs to be done with it.
2998
2999 A match processing unit can be an analysis in itself. For example, trace synchronization is done through such a processing unit. One just needs to set the processing unit in the TmfEventMatching constructor.
3000
3001 == Code examples ==
3002
3003 === Using network packets matching in an analysis ===
3004
This example shows how one can create a processing unit inline to create a link between two events. In this example, the code already uses an event request, so there is no need here to call the ''matchEvents'' method, which would only create another request.
3006
3007 <pre>
3008 class MyAnalysis extends TmfAbstractAnalysisModule {
3009
3010 private TmfNetworkEventMatching tcpMatching;
3011
3012 ...
3013
3014 protected void executeAnalysis() {
3015
3016 IMatchProcessingUnit matchProcessing = new IMatchProcessingUnit() {
3017 @Override
3018 public void matchingEnded() {
3019 }
3020
3021 @Override
3022 public void init(ITmfTrace[] fTraces) {
3023 }
3024
3025 @Override
3026 public int countMatches() {
3027 return 0;
3028 }
3029
3030 @Override
3031 public void addMatch(TmfEventDependency match) {
3032 log.debug("we got a tcp match! " + match.getSourceEvent().getContent() + " " + match.getDestinationEvent().getContent());
3033 TmfEvent source = match.getSourceEvent();
3034 TmfEvent destination = match.getDestinationEvent();
3035 /* Create a link between the two events */
3036 }
3037 };
3038
3039 ITmfTrace[] traces = { getTrace() };
3040 tcpMatching = new TmfNetworkEventMatching(traces, matchProcessing);
3041 tcpMatching.initMatching();
3042
MyEventRequest request = new MyEventRequest(this, 0); // request on trace number 0
3044 getTrace().sendRequest(request);
3045 }
3046
3047 public void analyzeEvent(TmfEvent event) {
3048 ...
3049 tcpMatching.matchEvent(event, 0);
3050 ...
3051 }
3052
3053 ...
3054
3055 }
3056
3057 class MyEventRequest extends TmfEventRequest {
3058
3059 private final MyAnalysis analysis;
3060
3061 MyEventRequest(MyAnalysis analysis, int traceno) {
3062 super(CtfTmfEvent.class,
3063 TmfTimeRange.ETERNITY,
3064 0,
3065 TmfDataRequest.ALL_DATA,
3066 ITmfDataRequest.ExecutionType.FOREGROUND);
3067 this.analysis = analysis;
3068 }
3069
3070 @Override
3071 public void handleData(final ITmfEvent event) {
3072 super.handleData(event);
3073 if (event != null) {
3074 analysis.analyzeEvent(event);
3075 }
3076 }
3077 }
3078 </pre>
3079
3080 === Match network events from UST traces ===
3081
Suppose a client-server application is instrumented using LTTng-UST. Traces are collected on the server and on some clients running on different machines. The traces can be synchronized using network event matching.
3083
3084 The following metadata describes the events:
3085
3086 <pre>
3087 event {
3088 name = "myapp:send";
3089 id = 0;
3090 stream_id = 0;
3091 loglevel = 13;
3092 fields := struct {
3093 integer { size = 32; align = 8; signed = 1; encoding = none; base = 10; } _sendto;
3094 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _messageid;
3095 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _data;
3096 };
3097 };
3098
3099 event {
3100 name = "myapp:receive";
3101 id = 1;
3102 stream_id = 0;
3103 loglevel = 13;
3104 fields := struct {
3105 integer { size = 32; align = 8; signed = 1; encoding = none; base = 10; } _from;
3106 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _messageid;
3107 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _data;
3108 };
3109 };
3110 </pre>
3111
3112 One would need to write an event match definition for those 2 events as follows:
3113
3114 <pre>
3115 public class MyAppUstEventMatching implements ITmfNetworkMatchDefinition {
3116
3117 @Override
3118 public Direction getDirection(ITmfEvent event) {
3119 String evname = event.getType().getName();
3120 if (evname.equals("myapp:receive")) {
3121 return Direction.IN;
3122 } else if (evname.equals("myapp:send")) {
3123 return Direction.OUT;
3124 }
3125 return null;
3126 }
3127
3128 @Override
    public List<Object> getUniqueField(ITmfEvent event) {
        String evname = event.getType().getName();
        List<Object> keys = new ArrayList<Object>();
3131
3132 if (evname.equals("myapp:receive")) {
3133 keys.add(event.getContent().getField("from").getValue());
3134 keys.add(event.getContent().getField("messageid").getValue());
3135 } else {
3136 keys.add(event.getContent().getField("sendto").getValue());
3137 keys.add(event.getContent().getField("messageid").getValue());
3138 }
3139
3140 return keys;
3141 }
3142
3143 @Override
3144 public boolean canMatchTrace(ITmfTrace trace) {
3145 if (!(trace instanceof CtfTmfTrace)) {
3146 return false;
3147 }
3148 CtfTmfTrace ktrace = (CtfTmfTrace) trace;
3149 String[] events = { "myapp:receive", "myapp:send" };
3150 return ktrace.hasAtLeastOneOfEvents(events);
3151 }
3152
3153 @Override
3154 public MatchingType[] getApplicableMatchingTypes() {
3155 MatchingType[] types = { MatchingType.NETWORK };
3156 return types;
3157 }
3158
3159 }
3160 </pre>
3161
Somewhere in code that is executed at the start of the plug-in (for example in the Activator), the following code has to be run:
3163
3164 <pre>
3165 TmfEventMatching.registerMatchObject(new MyAppUstEventMatching());
3166 </pre>
3167
Now, simply adding the traces to an experiment and clicking the '''Synchronize traces''' menu item will synchronize the traces using the new definition for event matching.
3169
3170 == Trace synchronization ==
3171
3172 Trace synchronization classes and interfaces are located in the ''org.eclipse.linuxtools.tmf.core.synchronization'' package.
3173
3174 === Synchronization algorithm ===
3175
3176 Synchronization algorithms are used to synchronize traces from events matched between traces. After synchronization, traces taken on different machines with different time references see their timestamps modified such that they all use the same time reference (typically, the time of at least one of the traces). With traces from different machines, it is impossible to have perfect synchronization, so the result is a best approximation that takes network latency into account.
3177
The abstract class '''SynchronizationAlgorithm''' is a processing unit for matches. New synchronization algorithms must extend this class; it already contains the functions to get the timestamp transforms for the different traces.
3179
3180 The ''fully incremental convex hull'' synchronization algorithm is the default synchronization algorithm.
3181
While the synchronization system provisions for more synchronization algorithms, there is not yet a way to select one; the experiment's trace synchronization uses the default algorithm. To test a new synchronization algorithm, the synchronization should be called directly like this:
3183
3184 <pre>
3185 SynchronizationAlgorithm syncAlgo = new MyNewSynchronizationAlgorithm();
3186 syncAlgo = SynchronizationManager.synchronizeTraces(syncFile, traces, syncAlgo, true);
3187 </pre>
3188
3189 === Timestamp transforms ===
3190
3191 Timestamp transforms are the formulae used to transform the timestamps from a trace into the reference time. The '''ITmfTimestampTransform''' is the interface to implement to add a new transform.
3192
3193 The following classes implement this interface:
3194
* '''TmfTimestampTransform''': default transform. It cannot be instantiated; it has a single static object, TmfTimestampTransform.IDENTITY, which returns the original timestamp.
3196 * '''TmfTimestampTransformLinear''': transforms the timestamp using a linear formula: ''f(t) = at + b'', where ''a'' and ''b'' are computed by the synchronization algorithm.
3197
3198 One could extend the interface for other timestamp transforms, for instance to have a transform where the formula would change over the course of the trace.
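
As a plain-Java illustration (it does not implement ''ITmfTimestampTransform''), such a transform could apply different linear coefficients before and after a given switch-over time; all values here are hypothetical.

<pre>
public class PiecewiseLinearTransform {
    private final long fSwitchTime;
    private final double fA1, fA2;
    private final long fB1, fB2;

    public PiecewiseLinearTransform(long switchTime, double a1, long b1, double a2, long b2) {
        fSwitchTime = switchTime;
        fA1 = a1;
        fB1 = b1;
        fA2 = a2;
        fB2 = b2;
    }

    /* f(t) = a1*t + b1 before the switch-over time, f(t) = a2*t + b2 after */
    public long transform(long timestamp) {
        if (timestamp < fSwitchTime) {
            return Math.round(fA1 * timestamp) + fB1;
        }
        return Math.round(fA2 * timestamp) + fB2;
    }
}
</pre>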
3199
3200 == Todo ==
3201
3202 Here's a list of features not yet implemented that would enhance trace synchronization and event matching:
3203
3204 * Ability to select a synchronization algorithm
3205 * Implement a better way to select the reference trace instead of arbitrarily taking the first in alphabetical order (for instance, the minimum spanning tree algorithm by Masoume Jabbarifar (article on the subject not published yet))
3206 * Ability to join traces from the same host so that even if one of the traces is not synchronized with the reference trace, it will take the same timestamp transform as the one on the same machine.
3207 * Instead of having the timestamp transforms per trace, have the timestamp transform as part of an experiment context, so that the trace's specific analysis, like the state system, are in the original trace, but are transformed only when needed for an experiment analysis.
3208 * Add more views to display the synchronization information (only textual statistics are available for now)
3209
3210 = Analysis Framework =
3211
3212 Analysis modules are useful to tell the user exactly what can be done with a trace. The analysis framework provides an easy way to access and execute the modules and open the various outputs available.
3213
3214 Analyses can have parameters they can use in their code. They also have outputs registered to them to display the results from their execution.
3215
3216 == Creating a new module ==
3217
3218 All analysis modules must implement the '''IAnalysisModule''' interface from the o.e.l.tmf.core project. An abstract class, '''TmfAbstractAnalysisModule''', provides a good base implementation. It is strongly suggested to use it as a superclass of any new analysis.
3219
3220 === Example ===
3221
3222 This example shows how to add a simple analysis module for an LTTng kernel trace with two parameters.
3223
3224 <pre>
3225 public class MyLttngKernelAnalysis extends TmfAbstractAnalysisModule {
3226
3227 public static final String PARAM1 = "myparam";
3228 public static final String PARAM2 = "myotherparam";
3229
3230 @Override
3231 public boolean canExecute(ITmfTrace trace) {
3232 /* This just makes sure the trace is an Lttng kernel trace, though
3233 usually that should have been done by specifying the trace type
3234 this analysis module applies to */
3235 if (!LttngKernelTrace.class.isAssignableFrom(trace.getClass())) {
3236 return false;
3237 }
3238
3239 /* Does the trace contain the appropriate events? */
3240 String[] events = { "sched_switch", "sched_wakeup" };
3241 return ((LttngKernelTrace) trace).hasAllEvents(events);
3242 }
3243
3244 @Override
3245 protected void canceling() {
3246 /* The job I am running in is being cancelled, let's clean up */
3247 }
3248
3249 @Override
3250 protected boolean executeAnalysis(final IProgressMonitor monitor) {
3251 /*
3252 * I am running in an Eclipse job, and I already know I can execute
3253 * on a given trace.
3254 *
3255 * In the end, I will return true if I was successfully completed or
3256 * false if I was either interrupted or something wrong occurred.
3257 */
3258 Object param1 = getParameter(PARAM1);
        int param2 = (Integer) getParameter(PARAM2);

        /* ... perform the analysis using param1 and param2 ... */

        return true;
    }
3261
3262 @Override
3263 public Object getParameter(String name) {
3264 Object value = super.getParameter(name);
3265 /* Make sure the value of param2 is of the right type. For sake of
3266 simplicity, the full parameter format validation is not presented
3267 here */
3268 if ((value != null) && name.equals(PARAM2) && (value instanceof String)) {
3269 return Integer.parseInt((String) value);
3270 }
3271 return value;
3272 }
3273
3274 }
3275 </pre>
3276
3277 === Available base analysis classes and interfaces ===
3278
3279 The following are available as base classes for analysis modules. They also extend the abstract '''TmfAbstractAnalysisModule'''
3280
* '''TmfStateSystemAnalysisModule''': A base analysis module that builds one state system. A module extending this class only needs to provide a state provider and the type of state system backend to use. All state systems should now use this base class as it also contains all the methods to actually create the state system with a given backend.
3282
3283 The following interfaces can optionally be implemented by analysis modules if they use their functionalities. For instance, some utility views, like the State System Explorer, may have access to the module's data through these interfaces.
3284
3285 * '''ITmfAnalysisModuleWithStateSystems''': Modules implementing this have one or more state systems included in them. For example, a module may "hide" 2 state system modules for its internal workings. By implementing this interface, it tells that it has state systems and can return them if required.
3286
3287 === How it works ===
3288
3289 Analyses are managed through the '''TmfAnalysisManager'''. The analysis manager is a singleton in the application and keeps track of all available analysis modules, with the help of '''IAnalysisModuleHelper'''. It can be queried to get the available analysis modules, either all of them or only those for a given tracetype. The helpers contain the non-trace specific information on an analysis module: its id, its name, the tracetypes it applies to, etc.
3290
When a trace is opened, the helpers for the applicable analyses create new instances of the analysis modules. The analyses are then kept in a field of the trace and can be executed automatically or on demand.
3292
3293 The analysis is executed by calling the '''IAnalysisModule#schedule()''' method. This method makes sure the analysis is executed only once and, if it is already running, it won't start again. The analysis itself is run inside an Eclipse job that can be cancelled by the user or the application. The developer must consider the progress monitor that comes as a parameter of the '''executeAnalysis()''' method, to handle the proper cancellation of the processing. The '''IAnalysisModule#waitForCompletion()''' method will block the calling thread until the analysis is completed. The method will return whether the analysis was successfully completed or if it was cancelled.
3294
3295 A running analysis can be cancelled by calling the '''IAnalysisModule#cancel()''' method. This will set the analysis as done, so it cannot start again unless it is explicitly reset. This is done by calling the protected method '''resetAnalysis'''.
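
For example, given an ''ITmfTrace'' instance ''trace'', running an analysis on demand could look like the following sketch. It assumes the trace exposes its analysis modules through a ''getAnalysisModule(id)'' accessor; the module id is the one declared in the plugin.xml example of the next section.

<pre>
IAnalysisModule module = trace.getAnalysisModule("my.lttng.kernel.analysis.id");
if (module != null) {
    module.schedule();                               // runs at most once, inside an Eclipse job
    boolean completed = module.waitForCompletion();  // blocks until the analysis is done
    if (!completed) {
        // the analysis was cancelled or did not complete successfully
    }
}
</pre>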
3296
3297 == Telling TMF about the analysis module ==
3298
3299 Now that the analysis module class exists, it is time to hook it to the rest of TMF so that it appears under the traces in the project explorer. The way to do so is to add an extension of type ''org.eclipse.linuxtools.tmf.core.analysis'' to a plugin, either through the ''Extensions'' tab of the Plug-in Manifest Editor or by editing directly the plugin.xml file.
3300
3301 The following code shows what the resulting plugin.xml file should look like.
3302
3303 <pre>
3304 <extension
3305 point="org.eclipse.linuxtools.tmf.core.analysis">
3306 <module
3307 id="my.lttng.kernel.analysis.id"
3308 name="My LTTng Kernel Analysis"
3309 analysis_module="my.plugin.package.MyLttngKernelAnalysis"
3310 automatic="true">
3311 <parameter
3312 name="myparam">
3313 </parameter>
3314 <parameter
3315 default_value="3"
            name="myotherparam">
      </parameter>
3317 <tracetype
3318 class="org.eclipse.linuxtools.lttng2.kernel.core.trace.LttngKernelTrace">
3319 </tracetype>
3320 </module>
3321 </extension>
3322 </pre>
3323
This defines an analysis module where the ''analysis_module'' attribute corresponds to the module class and must implement IAnalysisModule. This module has 2 parameters: ''myparam'' and ''myotherparam'', which has a default value of 3. The ''tracetype'' element tells which tracetypes this analysis applies to. There can be many tracetypes. Also, the ''automatic'' attribute of the module indicates whether this analysis should be run when the trace is opened, or wait for the user's explicit request.
3325
3326 Note that with these extension points, it is possible to use the same module class for more than one analysis (with different ids and names). That is a desirable behavior. For instance, a third party plugin may add a new tracetype different from the one the module is meant for, but on which the analysis can run. Also, different analyses could provide different results with the same module class but with different default values of parameters.
3327
3328 == Attaching outputs and views to the analysis module ==
3329
3330 Analyses will typically produce outputs the user can examine. Outputs can be a text dump, a .dot file, an XML file, a view, etc. All output types must implement the '''IAnalysisOutput''' interface.
3331
3332 An output can be registered to an analysis module at any moment by calling the '''IAnalysisModule#registerOutput()''' method. Analyses themselves may know what outputs are available and may register them in the analysis constructor or after analysis completion.
3333
3334 The various concrete output types are:
3335
3336 * '''TmfAnalysisViewOutput''': It takes a view ID as parameter and, when selected, opens the view.
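
For example, an analysis module could register a view output in its constructor or once its execution has completed; the view id below is hypothetical.

<pre>
// Inside the analysis module (IAnalysisModule#registerOutput())
registerOutput(new TmfAnalysisViewOutput("my.plugin.package.ui.views.myView"));
</pre>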
3337
3338 === Using the extension point to add outputs ===
3339
3340 Analysis outputs can also be hooked to an analysis using the same extension point ''org.eclipse.linuxtools.tmf.core.analysis'' in the plugin.xml file. Outputs can be matched either to a specific analysis identified by an ID, or to all analysis modules extending or implementing a given class or interface.
3341
The following code shows how to add a view output to the analysis defined above directly in the plugin.xml file. This extension does not have to be in the same plugin as the extension defining the analysis. Typically, an analysis module can be defined in a core plugin, along with some outputs that do not require UI elements. Other outputs, like views, which need UI elements, will be defined in a ui plugin.
3343
3344 <pre>
3345 <extension
3346 point="org.eclipse.linuxtools.tmf.core.analysis">
3347 <output
3348 class="org.eclipse.linuxtools.tmf.ui.analysis.TmfAnalysisViewOutput"
3349 id="my.plugin.package.ui.views.myView">
3350 <analysisId
3351 id="my.lttng.kernel.analysis.id">
3352 </analysisId>
3353 </output>
3354 <output
3355 class="org.eclipse.linuxtools.tmf.ui.analysis.TmfAnalysisViewOutput"
3356 id="my.plugin.package.ui.views.myMoreGenericView">
3357 <analysisModuleClass
3358 class="my.plugin.package.core.MyAnalysisModuleClass">
3359 </analysisModuleClass>
3360 </output>
3361 </extension>
3362 </pre>
3363
3364 == Providing help for the module ==
3365
For now, the only way to provide a meaningful help message to the user is by overriding the '''IAnalysisModule#getHelpText()''' method and returning a string that will be displayed in a message box.
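
For example, an analysis module could provide:

<pre>
@Override
public String getHelpText() {
    return "My LTTng kernel analysis computes statistics from the sched_switch and sched_wakeup events.";
}
</pre>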
3367
What still needs to be implemented is a way to add full user/developer documentation as a mediawiki text file for each module and automatically add it to the Eclipse Help. Clicking on the Help menu item of an analysis module would then open the corresponding page in the help.
3369
3370 == Using analysis parameter providers ==
3371
3372 An analysis may have parameters that can be used during its execution. Default values can be set when describing the analysis module in the plugin.xml file, or they can use the '''IAnalysisParameterProvider''' interface to provide values for parameters. '''TmfAbstractAnalysisParamProvider''' provides an abstract implementation of this interface, that automatically notifies the module of a parameter change.
3373
3374 === Example parameter provider ===
3375
3376 The following example shows how to have a parameter provider listen to a selection in the LTTng kernel Control Flow view and send the thread id to the analysis.
3377
3378 <pre>
3379 public class MyLttngKernelParameterProvider extends TmfAbstractAnalysisParamProvider {
3380
3381 private ControlFlowEntry fCurrentEntry = null;
3382
3383 private static final String NAME = "My Lttng kernel parameter provider"; //$NON-NLS-1$
3384
3385 private ISelectionListener selListener = new ISelectionListener() {
3386 @Override
3387 public void selectionChanged(IWorkbenchPart part, ISelection selection) {
3388 if (selection instanceof IStructuredSelection) {
3389 Object element = ((IStructuredSelection) selection).getFirstElement();
3390 if (element instanceof ControlFlowEntry) {
3391 ControlFlowEntry entry = (ControlFlowEntry) element;
3392 setCurrentThreadEntry(entry);
3393 }
3394 }
3395 }
3396 };
3397
3398 /*
3399 * Constructor
3400 */
    public MyLttngKernelParameterProvider() {
3402 super();
3403 registerListener();
3404 }
3405
3406 @Override
3407 public String getName() {
3408 return NAME;
3409 }
3410
3411 @Override
3412 public Object getParameter(String name) {
3413 if (fCurrentEntry == null) {
3414 return null;
3415 }
3416 if (name.equals(MyLttngKernelAnalysis.PARAM1)) {
            return fCurrentEntry.getThreadId();
3418 }
3419 return null;
3420 }
3421
3422 @Override
3423 public boolean appliesToTrace(ITmfTrace trace) {
3424 return (trace instanceof LttngKernelTrace);
3425 }
3426
3427 private void setCurrentThreadEntry(ControlFlowEntry entry) {
3428 if (!entry.equals(fCurrentEntry)) {
3429 fCurrentEntry = entry;
3430 this.notifyParameterChanged(MyLttngKernelAnalysis.PARAM1);
3431 }
3432 }
3433
3434 private void registerListener() {
3435 final IWorkbench wb = PlatformUI.getWorkbench();
3436
3437 final IWorkbenchPage activePage = wb.getActiveWorkbenchWindow().getActivePage();
3438
3439 /* Add the listener to the control flow view */
        IViewPart view = activePage.findView(ControlFlowView.ID);
        if (view != null) {
            view.getSite().getWorkbenchWindow().getSelectionService().addPostSelectionListener(selListener);
3444 }
3445 }
3446
3447 }
3448 </pre>
3449
3450 === Register the parameter provider to the analysis ===
3451
3452 To have the parameter provider class register to analysis modules, it must first register through the analysis manager. It can be done in a plugin's activator as follows:
3453
3454 <pre>
3455 @Override
3456 public void start(BundleContext context) throws Exception {
3457 /* ... */
        TmfAnalysisManager.registerParameterProvider("my.lttng.kernel.analysis.id", MyLttngKernelParameterProvider.class);
3459 }
3460 </pre>
3461
3462 where '''MyLttngKernelParameterProvider''' will be registered to analysis ''"my.lttng.kernel.analysis.id"''. When the analysis module is created, the new module will register automatically to the singleton parameter provider instance. Only one module is registered to a parameter provider at a given time, the one corresponding to the currently selected trace.
3463
3464 == Providing requirements to analyses ==
3465
3466 === Analysis requirement provider API ===
3467
3468 A requirement defines the needs of an analysis. For example, an analysis could need an event named ''"sched_switch"'' in order to be properly executed. The requirements are represented by the class '''TmfAnalysisRequirement'''. Since '''IAnalysisModule''' extends the '''IAnalysisRequirementProvider''' interface, all analysis modules must provide their requirements. If the analysis module extends '''TmfAbstractAnalysisModule''', it has the choice between overriding the requirements getter ('''IAnalysisRequirementProvider#getAnalysisRequirements()''') or not, since the abstract class returns an empty collection by default (no requirements).
3469
3470 === Requirement values ===
3471
When instantiating a requirement, the developer needs to specify a type to which all the values added to the requirement will be linked. In the earlier example, this would be an ''"event"'' or ''"eventName"'' type. The type is represented by a string, like all values added to the requirement object. With an ''"event"'' type requirement, a trace generator like the LTTng Control could automatically enable the required events. This is possible by using the '''TmfAnalysisRequirementHelper''' class. Another point to take into consideration is the priority level of each value added to the requirement object. The enum '''TmfAnalysisRequirement#ValuePriorityLevel''' gives the choice between '''ValuePriorityLevel#MANDATORY''' and '''ValuePriorityLevel#OPTIONAL'''. That way, we can tell whether an analysis can run without a value or not. To add values, one must call '''TmfAnalysisRequirement#addValue()'''.
3473
3474 Moreover, information can be added to requirements. That way, the developer can explicitly give help details at the requirement level instead of at the analysis level (which would just be a general help text). To add information to a requirement, the method '''TmfAnalysisRequirement#addInformation()''' must be called. Adding information is not mandatory.
3475
3476 === Example of providing requirements ===
3477
In this example, we will implement a method that initializes a requirement object and returns it in the '''IAnalysisRequirementProvider#getAnalysisRequirements()''' getter. The example method will return a set with two requirements. The first one will indicate the events needed by a specific analysis and the second one will tell on which domain type the analysis applies. In the event type requirement, we will indicate that the analysis needs a mandatory event and an optional one.
3479
3480 <pre>
3481 @Override
3482 public Iterable<TmfAnalysisRequirement> getAnalysisRequirements() {
3483 Set<TmfAnalysisRequirement> requirements = new HashSet<>();
3484
3485 /* Create requirements of type 'event' and 'domain' */
3486 TmfAnalysisRequirement eventRequirement = new TmfAnalysisRequirement("event");
3487 TmfAnalysisRequirement domainRequirement = new TmfAnalysisRequirement("domain");
3488
3489 /* Add the values */
3490 domainRequirement.addValue("kernel", TmfAnalysisRequirement.ValuePriorityLevel.MANDATORY);
3491 eventRequirement.addValue("sched_switch", TmfAnalysisRequirement.ValuePriorityLevel.MANDATORY);
3492 eventRequirement.addValue("sched_wakeup", TmfAnalysisRequirement.ValuePriorityLevel.OPTIONAL);
3493
3494 /* An information about the events */
3495 eventRequirement.addInformation("The event sched_wakeup is optional because it's not properly handled by this analysis yet.");
3496
3497 /* Add them to the set */
3498 requirements.add(domainRequirement);
3499 requirements.add(eventRequirement);
3500
3501 return requirements;
3502 }
3503 </pre>
3504
3505
3506 == TODO ==
3507
3508 Here's a list of features not yet implemented that would improve the analysis module user experience:
3509
3510 * Implement help using the Eclipse Help facility (without forgetting an eventual command line request)
* The abstract class '''TmfAbstractAnalysisModule''' executes an analysis as a job, but nothing compels a developer to do so for an analysis implementing the '''IAnalysisModule''' interface. We should force the execution of the analysis as a job, either from the trace itself or using the TmfAnalysisManager or by some other means.
3512 * Views and outputs are often registered by the analysis themselves (forcing them often to be in the .ui packages because of the views), because there is no other easy way to do so. We should extend the analysis extension point so that .ui plugins or other third-party plugins can add outputs to a given analysis that resides in the core.
3513 * Improve the user experience with the analysis:
3514 ** Allow the user to select which analyses should be available, per trace or per project.
3515 ** Allow the user to view all available analyses even though he has no imported traces.
3516 ** Allow the user to generate traces for a given analysis, or generate a template to generate the trace that can be sent as parameter to the tracer.
3517 ** Give the user a visual status of the analysis: not executed, in progress, completed, error.
3518 ** Give a small screenshot of the output as icon for it.
3519 ** Allow to specify parameter values from the GUI.
3520 * Add the possibility for an analysis requirement to be composed of another requirement.
3521 * Generate a trace session from analysis requirements.
3522
3523
3524 = Performance Tests =
3525
Performance testing allows measuring some metrics (CPU time, memory usage, etc.) of some part of the code during its execution. These metrics can then be used as is for information on the system's execution, or they can be compared either with other execution scenarios, or with previous runs of the same scenario, for instance after some optimization has been done on the code.
3527
3528 For automatic performance metric computation, we use the ''org.eclipse.test.performance'' plugin, provided by the Eclipse Test Feature.

== Add performance tests ==

=== Where ===

Performance tests are unit tests and they are added to the corresponding unit test plug-in. To separate performance tests from unit tests, a separate source folder, typically named ''perf'', is added to the plug-in, as illustrated below.
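
For instance, the source layout of a test plug-in could then look like the following sketch (the plug-in and folder names are only an example):

<pre>
org.eclipse.linuxtools.tmf.core.tests/
    src/        (regular unit tests)
    perf/       (performance tests)
    META-INF/MANIFEST.MF
    plugin.xml
</pre>
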

Tests are added to a package under the ''perf'' directory; the package name would typically match the name of the package it is testing. For each package, a class named '''AllPerfTests''' would list all the performance test classes inside this package. And, as for unit tests, a plug-in-level class named '''AllPerfTests''' would list all the packages' '''AllPerfTests''' classes, as sketched below.

When adding performance tests for the first time in a plug-in, the plug-in's '''AllPerfTests''' class should be added to the global list of performance tests, found in package ''org.eclipse.linuxtools.lttng.alltests'', in class '''RunAllPerfTests'''. This will ensure that performance tests for the plug-in are run along with the other performance tests.
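
As a minimal sketch, assuming JUnit 4 and a hypothetical package that contains the '''AnalysisBenchmark''' class shown in the next section, such a package-level '''AllPerfTests''' class is simply a test suite:

<pre>
package org.eclipse.linuxtools.lttng2.kernel.core.tests.perf.analysis; // hypothetical package name

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

/**
 * Lists all the performance test classes of this package. The plug-in-level
 * AllPerfTests class would in turn list the AllPerfTests classes of each package.
 */
@RunWith(Suite.class)
@Suite.SuiteClasses({
    AnalysisBenchmark.class
    // Other benchmark classes of this package would be added here
})
public class AllPerfTests {
}
</pre>
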

=== How ===

TMF uses the ''org.eclipse.test.performance'' framework for performance tests. With this framework, performance metrics are taken automatically and, if the test is run multiple times, the average and standard deviation are computed automatically. Results can optionally be stored in a database for later use.

Here is an example of how to use the test framework in a performance test:

<pre>
public class AnalysisBenchmark {

    private static final String TEST_ID = "org.eclipse.linuxtools#LTTng kernel analysis";
    private static final CtfTmfTestTrace testTrace = CtfTmfTestTrace.TRACE2;
    private static final int LOOP_COUNT = 10;

    /**
     * Performance test
     */
    @Test
    public void testTrace() {
        assumeTrue(testTrace.exists());

        /** Create a new performance meter for this scenario */
        Performance perf = Performance.getDefault();
        PerformanceMeter pm = perf.createPerformanceMeter(TEST_ID);

        /** Optionally, tag this test for summary or global summary on a given dimension */
        perf.tagAsSummary(pm, "LTTng Kernel Analysis", Dimension.CPU_TIME);
        perf.tagAsGlobalSummary(pm, "LTTng Kernel Analysis", Dimension.CPU_TIME);

        /** The test will be run LOOP_COUNT times */
        for (int i = 0; i < LOOP_COUNT; i++) {

            /** Start each run of the test with new objects to avoid different code paths */
            try (IAnalysisModule module = new LttngKernelAnalysisModule();
                    LttngKernelTrace trace = new LttngKernelTrace()) {
                module.setId("test");
                trace.initTrace(null, testTrace.getPath(), CtfTmfEvent.class);
                module.setTrace(trace);

                /** The analysis execution is being tested, so performance metrics
                 * are taken before and after the execution */
                pm.start();
                TmfTestHelper.executeAnalysis(module);
                pm.stop();

                /*
                 * Delete the supplementary files, so next iteration rebuilds
                 * the state system.
                 */
                File suppDir = new File(TmfTraceManager.getSupplementaryFileDir(trace));
                for (File file : suppDir.listFiles()) {
                    file.delete();
                }

            } catch (TmfAnalysisException | TmfTraceException e) {
                fail(e.getMessage());
            }
        }

        /** Once the test has been run many times, committing the results will
         * calculate average, standard deviation, and, if configured, save the
         * data to a database */
        pm.commit();
    }
}

</pre>

For more information, see [http://wiki.eclipse.org/Performance/Automated_Tests The Eclipse Performance Test How-to].

Some rules to help write performance tests are explained in the section [[ABC of performance testing | ABC of performance testing]].

=== Run a performance test ===

Performance tests are unit tests, so, just like unit tests, they can be run by right-clicking on a performance test class and selecting ''Run As'' -> ''JUnit Plug-in Test''.

By default, if no database has been configured, results will be displayed in the Console at the end of the test.

Here is the sample output from the test described in the previous section. It shows all the metrics that have been calculated during the test.

<pre>
Scenario 'org.eclipse.linuxtools#LTTng kernel analysis' (average over 10 samples):
System Time: 3.04s (95% in [2.77s, 3.3s]) Measurable effect: 464ms (1.3 SDs) (required sample size for an effect of 5% of mean: 94)
Used Java Heap: -1.43M (95% in [-33.67M, 30.81M]) Measurable effect: 57.01M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6401)
Working Set: 14.43M (95% in [-966.01K, 29.81M]) Measurable effect: 27.19M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6400)
Elapsed Process: 3.04s (95% in [2.77s, 3.3s]) Measurable effect: 464ms (1.3 SDs) (required sample size for an effect of 5% of mean: 94)
Kernel time: 621ms (95% in [586ms, 655ms]) Measurable effect: 60ms (1.3 SDs) (required sample size for an effect of 5% of mean: 39)
CPU Time: 6.06s (95% in [5.02s, 7.09s]) Measurable effect: 1.83s (1.3 SDs) (required sample size for an effect of 5% of mean: 365)
Hard Page Faults: 0 (95% in [0, 0]) Measurable effect: 0 (1.3 SDs) (required sample size for an effect of 5% of stdev: 6400)
Soft Page Faults: 9.27K (95% in [3.28K, 15.27K]) Measurable effect: 10.6K (1.3 SDs) (required sample size for an effect of 5% of mean: 5224)
Text Size: 0 (95% in [0, 0])
Data Size: 0 (95% in [0, 0])
Library Size: 32.5M (95% in [-12.69M, 77.69M]) Measurable effect: 79.91M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6401)
</pre>

Results from performance tests can be saved automatically to a Derby database. Derby can be run either in embedded mode, locally on a machine, or on a server. More information on setting up Derby for performance tests can be found here: [http://wiki.eclipse.org/Performance/Automated_Tests The Eclipse Performance Test How-to]. The following shows how to configure an Eclipse run configuration to store results in a Derby database located on a server.

Note that to store results in a Derby database, the ''org.apache.derby'' plug-in must be available within your Eclipse installation. Since it is an optional dependency, it is not included in the target definition. It can be installed via the '''Orbit''' repository, in ''Help'' -> ''Install new software...''. If the '''Orbit''' repository is not listed, click on the latest build from [http://download.eclipse.org/tools/orbit/downloads/] and copy the link under ''Orbit Build Repository''.

To store the data in a database, the database needs to be configured in the run configuration. In ''Run'' -> ''Run Configurations...'', under ''JUnit Plug-in Test'', find the run configuration that corresponds to the test you wish to run, or create one if it is not present yet.

In the ''Arguments'' tab, in the box under ''VM Arguments'', add the following information on separate lines:

<pre>
-Declipse.perf.dbloc=//javaderby.dorsal.polymtl.ca
-Declipse.perf.config=build=mybuild;host=myhost;config=linux;jvm=1.7
</pre>

The ''eclipse.perf.dbloc'' parameter is the URL (or filename) of the Derby database. The database is by default named ''perfDB'', with username and password ''guest''/''guest''. If the database does not exist, it will be created, initialized and populated.

The ''eclipse.perf.config'' parameter identifies a '''variation''': it typically identifies the build on which it is run (commit id and/or build date, etc.), the machine (host) on which it is run, the configuration of the system (for example Linux or Windows), the JVM, etc. That parameter is a list of ';'-separated key-value pairs. To be backward-compatible with the Eclipse Performance Tests Framework, the 4 keys mentioned above are mandatory, but any other key-value pairs can be added.
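
For instance, a run against a local, embedded Derby database (using a hypothetical local path) and with a couple of extra key-value pairs added to the variation could use VM arguments such as the following; all values below are placeholders:

<pre>
-Declipse.perf.dbloc=/home/user/perfDB
-Declipse.perf.config=build=20140506;host=perf-node-1;config=linux;jvm=1.7;commit=abc1234
</pre>
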

== ABC of performance testing ==

Here are some rules to help design good and meaningful performance tests.

=== Determine what to test ===

For tests to be significant, it is important to choose exactly what is to be tested and to make sure it is reproducible on every run. To limit the amount of noise caused by the TMF framework, the performance test code should be tweaked so that only the method under test is run. For instance, a trace should not be "opened" (by calling the ''traceOpened()'' method) to test an analysis, since the ''traceOpened'' method will also trigger the indexing and the execution of all applicable automatic analyses.

For each code path to test, multiple scenarios can be defined. For instance, an analysis could be run on different traces of different sizes. The results will show how the system scales and/or varies depending on the objects it is executed on.

The number of '''samples''' used to compute the results is also important. The code to test will typically be inside a '''for''' loop that runs exactly the same code a given number of times. All objects used for the test must start in the same state at each iteration of the loop. For instance, any trace used during an execution should be disposed of at the end of the loop, and any supplementary file that may have been generated during the run should be deleted.

Before submitting a performance test for code review, you should run it a few times (with results in the Console) and check that the standard deviation is not too large and that the results are reproducible.

=== Metrics descriptions and considerations ===

CPU time: CPU time represents the total time spent on CPU by the current process during the test execution. It is the sum of the time spent by all threads. On one hand, it is more significant than the elapsed time, since it should be the same no matter how many CPU cores the computer has. But since it sums the time of every thread, one has to make sure that only threads related to what is being tested are executed during that time, or else the results will include the times of those other threads. For an application like TMF, it is hard to control all the threads, and empirically, CPU time is found to vary a lot more than the system time from one run to the next.

System time (elapsed time): The time between the start and the end of the execution. It will vary depending on the parallelisation of the threads and the load of the machine.

Kernel time: The time spent in kernel mode.

Used Java Heap: The difference between the memory used at the beginning of the execution and at the end. This metric may be useful to calculate the overall size occupied by the data generated by the test run, by forcing a garbage collection before taking the metrics at the beginning and at the end of the execution, as sketched below. But it will not show the memory used throughout the execution.
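
For instance, here is a minimal sketch of how the benchmark loop shown earlier could hint a garbage collection around the measured section, so that the heap metric mostly reflects retained data rather than garbage that is still awaiting collection:

<pre>
System.gc();                              /* request a GC so the starting heap measurement is clean */
pm.start();
TmfTestHelper.executeAnalysis(module);    /* the code under test */
System.gc();                              /* request a GC so the end measurement reflects retained data */
pm.stop();
</pre>

Note that ''System.gc()'' is only a hint to the JVM, and that running it inside the measured section also adds to the CPU and elapsed time metrics, so this should only be done when the heap metric is the one of interest.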