-barectf
-=======
+# barectf
**barectf** is a command-line utility which generates pure C99
-code that is able to write native
-[CTF](http://git.efficios.com/?p=ctf.git;a=blob_plain;f=common-trace-format-specification.txt;hb=master)
-(the Common Trace Format) out of a pre-written CTF metadata file.
+code that is able to write native [Common Trace Format](http://diamon.org/ctf)
+(CTF) binary streams.
You will find barectf interesting if:
- 1. You need to trace a program.
- 2. You need tracing to be as fast as possible, but also very flexible:
+ 1. You need to trace an application.
+ 2. You need tracing to be efficient, yet flexible:
record integers of custom sizes, custom floating point numbers,
- enumerations mapped to a specific integer type, structure fields,
- NULL-terminated strings, static and dynamic arrays, etc.
+      enumerations backed by a specific integer type, and
+ null-terminated UTF-8/ASCII strings (C strings).
3. You need to be able to convert the recorded binary events to
human-readable text, as well as analyze them with Python scripts
([Babeltrace](http://www.efficios.com/babeltrace) does all that,
given a CTF input).
4. You _cannot_ use [LTTng](http://lttng.org/), an efficient tracing
framework for the Linux kernel and Linux/BSD user applications, which
- outputs CTF.
+ also outputs CTF.
The target audience of barectf is developers who need to trace bare metal
systems (without an operating system). The code produced by barectf
-is pure C99 and is lightweight enough to fit on a tiny microcontroller.
-Each event described in the CTF metadata input becomes one C function with
-one parameter mapped to one event field. CTF data is recorded in a buffer of
-any size provided by the user. This buffer corresponds to one CTF packet.
-The generated tracing functions report when the buffer is full. The user
-is entirely responsible for the buffering scheme: leave the buffer in memory,
-save it to some permanent storage, swap it with another empty buffer and
-concatenate recorded packets, etc.
+is pure C99 and can be lightweight enough to fit on a tiny microcontroller.
-barectf is written in Python 3 and currently uses
-[pytsdl](https://github.com/efficios/pytsdl) to parse the CTF metadata file
-provided by the user.
+**Key features**:
+ * Single input: easy-to-write [YAML](https://en.wikipedia.org/wiki/YAML)
+ configuration file (documentation below)
+ * 1-to-1 mapping from tracing function parameters to event fields
+ * Custom and bundled _platforms_ hiding the details of opening/closing
+ packets and writing them to a back-end (continuous tracing), getting
+ the clock values, etc.:
+ * _linux-fs_: basic Linux application tracing writing stream files to
+ the file system for demonstration purposes
+ * _parallella_: Adapteva Epiphany/[Parallella](http://parallella.org/)
+ with host-side consumer
+ * CTF metadata generated by the command-line tool (automatic trace UUID,
+ stream IDs, and event IDs)
+ * All basic CTF types are supported: integers, floating point numbers,
+ enumerations, and null-terminated strings (C strings)
+ * Binary streams produced by the generated C code and metadata file
+ produced by barectf are CTF 1.8-compliant
+ * Human-readable error reporting
-Installing
-----------
+**Current limitations**:
-Make sure you have `pip` for Python 3. On the latest Ubuntu releases,
-it is called `pip3`:
+As of this version:
+
+ * For a given barectf stream-specific context, all the generated
+   tracing C functions must be called from the same thread, and never
+   from an interrupt handler, unless you provide your own
+   synchronization mechanism.
+ * CTF compound types (array, sequence, structure, variant) are not supported
+ yet, except at some very specific locations in the metadata.
+
+barectf is written in Python 3.
+
+
+## Installing
+
+Make sure you have Python 3 and `pip` for Python 3 installed, then
+install barectf.
+
+Note that you may pass the `--user` argument to
+`pip install` to install the tool in your home directory (instead of
+installing globally).
+
+**Latest Ubuntu**:
sudo apt-get install python3-pip
+ sudo pip3 install barectf
-On Ubuntu 12.04, you need to install `setuptools` first, then use
-`easy_install3` to install `pip3`:
+**Ubuntu 12.04 and lower**:
sudo apt-get install python3-setuptools
sudo easy_install3 pip
+ sudo pip3 install barectf
-Install barectf:
+**Debian**:
+ sudo apt-get install python3-pip
sudo pip3 install barectf
+**Fedora 20 and up**:
-Using
------
+ sudo yum install python3-pip
+ sudo pip3 install barectf
-Using barectf involves:
+**Arch Linux**:
- 1. Writing the CTF metadata file describing the various headers,
- contexts and event fields.
- 2. Running the `barectf` command to generate C99 files out of
- the CTF metadata file.
- 3. Using the generated C code in your specific application.
+    sudo pacman -S python-pip
+ sudo pip install barectf
-The following subsections explain the three steps above.
+**OS X**
-Also, have a look at the [`doc/examples`](doc/examples) directory which
-contains a few complete examples.
+With [Homebrew](http://brew.sh/):
+ brew install python3
+ pip3 install barectf
-### Writing the CTF metadata
-The **Common Trace Format** is a specialized file format for recording
-trace data. CTF is designed to be very fast to write and very flexible.
-All headers, contexts and event fields written in binary files are
-described using a custom C-like, declarative language, TSDL (Trace
-Stream Description Language). The file containing this description is
-called the **CTF metadata**. The latter may be automatically generated
-by a tracer, like it is the case of LTTng, or written by hand. This
-metadata file is then used by CTF trace readers to know the layout of
-CTF binary files containing actual event contexts and fields.
+## What is CTF?
-The CTF metadata file contains several blocks describing various CTF
-binary layouts. A CTF trace file is a concatenation of several CTF
-packets. Here's the anatomy of a CTF packet:
+See the [CTF in a nutshell](http://diamon.org/ctf/#ctf-in-a-nutshell)
+section of CTF's website to understand the basics of this
+trace format.
-![CTF packet anatomy](doc/ctf-packet.png)
+The most important thing to understand about CTF, for barectf use
+cases, is the layout of a binary stream packet:
-A CTF packet belongs to a specific CTF stream. While the packet header
-is the same for all streams of a given CTF trace, everything else is
-specified per stream. Following this packet header is a packet context,
-and then actual recorded events. Each event starts with a mandatory
-header (same event header for all events of a given stream). The event
-header is followed by an optional event context with a layout shared
-by all events of a given stream. Then follows another optional event
-context, although this one has a layout specific to the event type.
-Finally, event fields are written.
+ * Packet header (defined at the trace level)
+ * Packet context (defined at the stream level)
+ * Sequence of events (defined at the stream level):
+ * Event header (defined at the stream level)
+ * Stream event context (defined at the stream level)
+ * Event context (defined at the event level)
+ * Event payload (defined at the event level)
-barectf asks you to write the CTF metadata by hand. Although its official
-[specification](http://git.efficios.com/?p=ctf.git;a=blob_plain;f=common-trace-format-specification.txt;hb=master)
-is thorough, you will almost always start from this template:
+The following diagram, stolen without remorse from CTF's website, shows
+said packet layout:
-```
-/* CTF 1.8 */
-
-/* a few useful standard integer aliases */
-typealias integer {size = 8; align = 8;} := uint8_t;
-typealias integer {size = 16; align = 16;} := uint16_t;
-typealias integer {size = 32; align = 32;} := uint32_t;
-typealias integer {size = 64; align = 64;} := uint64_t;
-typealias integer {size = 8; align = 8; signed = true;} := int8_t;
-typealias integer {size = 16; align = 16; signed = true;} := int16_t;
-typealias integer {size = 32; align = 32; signed = true;} := int32_t;
-typealias integer {size = 64; align = 64; signed = true;} := int64_t;
-
-/* IEEE 754 standard-precision floating point alias */
-typealias floating_point {
- exp_dig = 8;
- mant_dig = 24;
- align = 32;
-} := float;
-
-/* IEEE 754 double-precision floating point alias */
-typealias floating_point {
- exp_dig = 11;
- mant_dig = 53;
- align = 64;
-} := double;
-
-/* trace block */
-trace {
- /* CTF version 1.8; leave this as is */
- major = 1;
- minor = 8;
-
- /*
- * Native byte order (`le` or `be`). This is used by barectf to generate
- * the appropriate code when writing data to the packet.
- */
- byte_order = le;
-
- /*
- * Packet header. All packets (buffers) will have the same header.
- *
- * Special fields recognized by barectf (must appear in this order):
- *
- * magic: will be set to CTF's magic number (must be the first field)
- * (32-bit unsigned integer) (mandatory)
- * stream_id: will be set to the ID of the stream associated with
- * this packet (unsigned integer of your choice) (mandatory)
- */
- packet.header := struct {
- uint32_t magic;
- uint32_t stream_id;
- };
-};
-
-/* environment variables; you may add custom entries */
-env {
- domain = "bare";
- tracer_name = "barectf";
- tracer_major = 0;
- tracer_minor = 1;
- tracer_patchlevel = 0;
-};
-
-/* clock descriptor */
-clock {
- /* clock name */
- name = my_clock;
-
- /* clock frequency (Hz) */
- freq = 1000000000;
-
- /* optional clock value offset; offset from Epoch is: offset * (1 / freq) */
- offset = 0;
-};
-
-/* alias for integer used to hold clock cycles */
-typealias integer {
- size = 32;
-
- /* map to the appropriate clock using its name */
- map = clock.my_clock.value;
-} := my_clock_int_t;
-
-/*
- * A stream. You may have as many streams as you want. Events are unique
- * within their own stream. The main advantage of having multiple streams
- * is having different event headers, stream event contexts and stream
- * packet contexts for each one.
- */
-stream {
- /*
- * Mandatory stream ID (must fit the integer type of
- * `trace.packet.header.stream_id`).
- */
- id = 0;
-
- /*
- * Mandatory packet context. This structure follows the packet header
- * (see `trace.packet.header`) immediately in CTF binary streams.
- *
- * Special fields recognized by barectf:
- *
- * timestamp_begin: will be set to the current clock value when opening
- * the packet (same integer type as the clock's value)
- * timestamp_end: will be set to the current clock value when closing
- * the packet (same integer type as the clock's value)
- * content_size: will be set to the content size, in bits, of this
- * stream (unsigned 32-bit or 64-bit integer) (mandatory)
- * packet_size: will be set to the packet size, in bits, of this
- * stream (unsigned 32-bit or 64-bit integer) (mandatory)
- * events_discarded: if present, the barectf_close_packet() function of
- * this stream will accept an additional parameter to
- * specify the number of events that were discarded in
- * this stream _so far_ (free-running counter for the
- * whole stream)
- * cpu_id: if present, the barectf_open_packet() function of
- * this stream will accept an additional parameter to
- * specify the ID of the CPU associated with this stream
- * (a given stream should only be written to by a
- * specific CPU) (unsigned integer of your choice)
- *
- * `timestamp_end` must be present if `timestamp_begin` exists.
- */
- packet.context := struct {
- my_clock_int_t timestamp_begin;
- my_clock_int_t timestamp_end;
- uint64_t content_size;
- uint64_t packet_size;
- uint32_t cpu_id;
- };
+![](http://diamon.org/ctf/img/ctf-stream-packet.png)
- /*
- * Mandatory event header. All events recorded in this stream will start
- * with this structure.
- *
- * Special fields recognized by barectf:
- *
- * id: will be filled by the event ID corresponding to a tracing
- * function (unsigned integer of your choice)
- * timestamp: will be filled by the current clock's value (same integer
- * type as the clock's value)
- */
- event.header := struct {
- uint32_t id;
- my_clock_int_t timestamp;
- };
+Each of those six dynamic scopes, if defined at all, has an associated
+CTF type, which barectf requires to be a structure type.
- /*
- * Optional stream event context (you may remove the whole block or leave
- * the structure empty if you don't want any). This structure follows the
- * event header (see `stream.event.header`) immediately in CTF binary
- * streams.
- */
- event.context := struct {
- int32_t _some_stream_event_context_field;
- };
-};
-
-/*
- * An event. Events have an ID, a name, an optional context and fields. An
- * event is associated to a specific stream using its stream ID.
- */
-event {
- /*
- * Mandatory event name. This is used by barectf to generate the suffix
- * of this event's corresponding tracing function, so make sure it follows
- * the C identifier syntax even though it's a quoted string here.
- */
- name = "my_event";
-
- /*
- * Mandatory event ID (must fit the integer type of in
- * `stream.event.header.id` of the associated stream).
- */
- id = 0;
-
- /* ID of the stream in which this event will be recorded */
- stream_id = 0;
-
- /*
- * Optional event context (you may remove the whole block or leave the
- * structure empty if you don't want one). This structure follows the
- * stream event context (if it exists) immediately in CTF binary streams.
- */
- context := struct {
- int32_t _some_event_context_field;
- };
- /*
- * Mandatory event fields (although the structure may be left empty if this
- * event has no fields). This structure follows the event context (if it
- * exists) immediately in CTF binary streams.
- */
- fields := struct {
- uint32_t _a;
- uint32_t _b;
- uint16_t _c;
- string _d;
- };
-};
-```
+## Using
-The top `/* CTF 1.8 */` is actually needed right there, and as is, since it
-acts as a CTF metadata magic number for CTF readers.
+Using barectf involves the following steps:
-Only one stream and one event (belonging to this single stream) are described
-in this template, but you may add as many as you need.
+ 1. Writing the YAML configuration file defining the various header,
+ context, and event field types.
+ 2. Running the `barectf` command-line tool with this configuration file
+ to generate the CTF metadata and C files.
+ 3. Using the generated C code (tracing functions), along with the C code
+ provided by the appropriate barectf platform, in the source code of
+ your own application.
+ 4. Running your application, along with anything the barectf platform
+ you chose requires, to generate the binary streams of a CTF trace.
-The following subsections describe the features of CTF metadata supported
-by barectf.
+When your application runs, it generates CTF packets. Depending on the
+chosen barectf platform, those packets are consumed and written
+sequentially to some back-end for later viewing/analysis.
+Here's a diagram summarizing the steps described above:
-#### Types
+![](http://0x3b.org/ss/cardiectasis400.png)
-The supported structure field types are:
+The following subsections explain the four steps above.
- * **integers** of any size (64-bit and less), any alignment (power of two)
- * **floating point numbers** of any total size (64-bit and less), any
- alignment (power of two)
- * NULL-terminated **strings** of bytes
- * **enumerations** associated with a specific integer type
- * **static** and **dynamic arrays** of any type
- * **structures** containing only integers, floating point numbers,
- enumerations and _static_ arrays
+Also, have a look at the [`doc/examples`](doc/examples) directory, which
+contains complete examples.
-CTF also supports _variants_ (dynamic selection between different types),
-but barectf **does not**. Any detected variant will throw an error when
-running `barectf`.
+### Writing the YAML configuration file
-##### Integers
+The barectf [YAML](https://en.wikipedia.org/wiki/YAML) configuration file
+is the only input the `barectf` command-line tool needs in order to generate
+the corresponding CTF metadata and C files.
-CTF integers are defined like this:
+To get started with something concrete, here's a minimal configuration:
+```yaml
+version: '2.0'
+metadata:
+ type-aliases:
+ uint16:
+ class: int
+ size: 16
+ trace:
+ byte-order: le
+ streams:
+ my_stream:
+ packet-context-type:
+ class: struct
+ fields:
+ packet_size: uint16
+ content_size: uint16
+ events:
+ my_event:
+ payload-type:
+ class: struct
+ fields:
+ my_field:
+ class: int
+ size: 8
```
-integer {
- /* mandatory size in bits (64-bit and less) */
- size = 16;
-
- /*
- * Optional alignment in bits (power of two). Default is 8 when the
- * size is a multiple of 8, and 1 otherwise.
- */
- align = 16;
-
- /* optional signedness (`true` or `false`); default is unsigned */
- signed = true;
-
- /*
- * Optional byte order (`le`, `be`, `native` or `network`). `native`
- * will use the byte order specified by the `trace.byte_order`.
- * Default is `native`.
- */
- byte_order = le;
-
- /*
- * Optional display base, used to display the integer value when
- * reading the trace. Valid values are 2 (or `binary`, `bin` and `b`),
- * 8 (or `o`, `oct` or `octal`), 10 (or `u`, `i`, `d`, `dec` or
- * `decimal`), and 16 (or `x`, `X`, `p`, `hex` or `hexadecimal`).
- * Default is 10.
- */
- base = hex;
-
- /*
- * Encoding (if this integer represents a character). Valid values
- * are `none`, `UTF8` and `ASCII`. Default is `none`.
- */
- encoding = UTF8;
-}
-```
-The size (the only mandatory property) does _not_ have to be a power of two:
+The `version` property must be set to the `2.0` _string_ (hence the single
+quotes). As features are added to barectf and to its configuration file schema,
+this version will be bumped accordingly.
+
+The `metadata` property is where the properties and layout of the
+eventual CTF trace are defined. The accepted properties of each object
+are documented later in this document. For the moment, note simply
+that the native byte order of the trace is set to `le` (little-endian),
+and that there's one defined stream named `my_stream`, having one
+defined event named `my_event`, having a structure as its payload
+type, with a single 8-bit unsigned integer type field named `my_field`. Also,
+the stream packet context type is a structure defining the mandatory
+`packet_size` and `content_size` special fields as 16-bit unsigned integer
+types.
+Running `barectf` with the configuration above (as a file named `config.yaml`):
+
+ barectf config.yaml
+
+will produce a C file (`barectf.c`) and its header file (`barectf.h`),
+the latter declaring the following function:
+
+```c
+void barectf_my_stream_trace_my_event(
+ struct barectf_my_stream_ctx *ctx, uint8_t ep_my_field);
```
-integer {size = 23;}
+
+`ctx` is the barectf context for the stream named `my_stream` (usually
+initialized and provided by the barectf platform), and `ep_my_field` is the
+value of the `my_event` event payload's `my_field` field.
+
+The following subsections define all the objects of the YAML configuration
+file.
+
+
+#### Configuration object
+
+The top-level object of the YAML configuration file.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `version` | String | Must be set to `'2.0'` | Required | N/A |
+| `prefix` | String | Prefix to be used for function names, file names, etc. | Optional | `barectf_` |
+| `metadata` | [Metadata object](#metadata-object) | Trace metadata | Required | N/A |
+
+The `prefix` property must be set to a valid C identifier. It can be
+overridden by the `barectf` command-line tool's `--prefix` option.
+
+**Example**:
+
+```yaml
+version: '2.0'
+prefix: axx_
+metadata:
+ type-aliases:
+ uint16:
+ class: int
+ size: 16
+ trace:
+ byte-order: le
+ streams:
+ my_stream:
+ packet-context-type:
+ class: struct
+ fields:
+ packet_size: uint16
+ content_size: uint16
+ events:
+ my_event:
+ payload-type:
+ class: struct
+ fields:
+ a:
+ class: int
+ size: 8
```
-is perfectly valid.
-A CTF integer field will make barectf produce a corresponding C integer
-function parameter with an appropriate size. For example, the 23-bit integer
-above would produce an `uint32_t` parameter (of which only the first 23
-least significant bits will be written to the trace), while the first
-16-bit one will produce an `int16_t` parameter.
+#### Metadata object
+
+A metadata object defines the desired layout of the CTF trace to be
+produced by the generated C code. It is used by barectf to generate C code,
+as well as a corresponding CTF metadata file.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `type-aliases` | Associative array of strings (alias names) to [type objects](#type-objects) or strings (previous alias names) | Type aliases to be used in trace, stream, and event objects | Optional | `{}` |
+| `log-levels` | Associative array of strings (log level names) to log level constant integers | Log levels to be used in event objects | Optional | `{}` |
+| `clocks` | Associative array of strings (clock names) to [clock objects](#clock-object) | Trace clocks | Optional | `{}` |
+| `env` | Associative array of strings (names) to strings or integers (values) | Trace environment variables | Optional | `{}` |
+| `trace` | [Trace object](#trace-object) | Metadata common to the whole trace | Required | N/A |
+| `streams` | Associative array of strings (stream names) to [stream objects](#stream-object) | Trace streams | Required | N/A |
+
+Each clock name of the `clocks` property must be a valid C identifier.
+
+The `streams` property must contain at least one entry. Each stream name must be
+a valid C identifier.
+
+Each environment variable name in the `env` property must be a valid
+C identifier. Those variables are appended to the environment variables
+that barectf itself sets.
+
+The order of the `type-aliases` entries is important: a type alias may
+only inherit from another type alias that is defined before it.
+
+**Example**:
+
+```yaml
+type-aliases:
+ uint8:
+ class: integer
+ size: 8
+ uint16:
+ class: integer
+ size: 16
+ uint32:
+ class: integer
+ size: 32
+ uint64:
+ class: integer
+ size: 64
+ clock-int:
+ inherit: uint64
+ property-mappings:
+ - type: clock
+ name: my_clock
+ property: value
+ byte: uint8
+ uuid:
+ class: array
+ length: 16
+ element-type: byte
+log-levels:
+ emerg: 0
+ alert: 1
+ critical: 2
+ error: 3
+ warning: 4
+ notice: 5
+ info: 6
+clocks:
+ my_clock:
+ freq: 1000000000
+ offset:
+ seconds: 1434072888
+ return-ctype: uint64_t
+env:
+ my_system_version: '0.3.2-2015.03'
+ bID: 15
+trace:
+ byte-order: le
+ uuid: auto
+ packet-header-type:
+ class: struct
+ min-align: 8
+ fields:
+ magic: uint32
+ uuid: uuid
+ stream_id: uint8
+streams:
+ my_stream:
+ packet-context-type:
+ class: struct
+ fields:
+ timestamp_begin: clock-int
+ timestamp_end: clock-int
+ packet_size: uint32
+ something: float
+ content_size: uint32
+ events_discarded: uint32
+ event-header-type:
+ class: struct
+ fields:
+ timestamp: clock-int
+ id: uint16
+ events:
+ simple_uint32:
+ log-level: error
+ payload-type:
+ class: struct
+ fields:
+ value: uint32
+ simple_int16:
+ payload-type:
+ class: struct
+ fields:
+ value:
+ inherit: uint16
+ signed: true
+```
-The `integer` block also accepts a `map` property which is only used
-when defining the integer used to carry the value of a specified
-clock. You may always follow the example above.
+#### Clock object
-##### Floating point numbers
+A CTF clock.
-CTF floating point numbers are defined like this:
+**Properties**:
-```
-floating_point {
- /* exponent size in bits */
- exp_dig = 8;
-
- /* mantissa size in bits */
- mant_dig = 24;
-
- /*
- * Optional alignment (power of two). Default is 8 when the total
- * size (exponent + mantissa) is a multiple of 8, and 1 otherwise.
- */
- align = 32;
-
- /*
- * Optional byte order (`le`, `be`, `native` or `network`). `native`
- * will use the byte order specified by the `trace.byte_order`.
- * Default is `native`.
- */
- byte_order = le;
-}
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `freq` | Integer (positive) | Frequency (Hz) | Optional | 1000000000 |
+| `description` | String | Description | Optional | No description |
+| `uuid` | String (UUID canonical format) | UUID (unique identifier of this clock) | Optional | No UUID |
+| `error-cycles` | Integer (zero or positive) | Error (uncertainty) of clock in clock cycles | Optional | 0 |
+| `offset` | [Clock offset object](#clock-offset-object) | Offset | Optional | Default clock offset object |
+| `absolute` | Boolean | Absolute clock | Optional | `false` |
+| `return-ctype` | String | Return C type of the associated clock callback | Optional | `uint32_t` |
+
+The `return-ctype` property must be set to a valid C integer type
+(or valid type definition). This is not currently validated by barectf
+itself, but the C compiler will fail to compile the generated C code
+if the clock's return type is not a valid C integer type.
+
+**Example**:
+
+```yaml
+freq: 2450000000
+description: CCLK/A2 (System clock, A2 clock domain)
+uuid: 184883f6-6b6e-4bfd-bcf7-1e45c055c56a
+error-cycles: 23
+offset:
+ seconds: 1434072888
+ cycles: 2003912
+absolute: false
+return-ctype: unsigned long long
```
-If a CTF floating point number is defined with an 8-bit exponent, a 24-bit
-mantissa and a 32-bit alignment, its barectf C function parameter type will
-be `float`. It will be `double` for an 11-bit exponent, 53-bit mantissa
-and 64-bit aligned CTF floating point number. Any other configuration
-will produce a `uint64_t` function parameter (you will need to cast your
-custom floating point number to this when calling the tracing function).
+##### Clock offset object
-##### Strings
+An offset in seconds and clock cycles from the Unix epoch.
-CTF strings are pretty simple to define:
+**Properties**:
-```
-string
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `seconds` | Integer (zero or positive) | Seconds since the Unix epoch | Optional | 0 |
+| `cycles` | Integer (zero or positive) | Clock cycles since the Unix epoch plus the value of the `seconds` property | Optional | 0 |
+
+**Example**:
+
+```yaml
+seconds: 1435617321
+cycles: 194570
```
-They may also have an encoding property:
+#### Trace object
+
+Metadata common to the whole trace.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `byte-order` | String | Native byte order (`le` for little-endian or `be` for big-endian) | Required | N/A |
+| `uuid` | String (UUID canonical format or `auto`) | UUID (unique identifier of this trace); automatically generated if value is `auto` | Optional | No UUID |
+| `packet-header-type` | [Type object](#type-objects) or string (alias name) | Type of packet header (must be a [structure type object](#structure-type-object)) | Optional | No packet header |
+
+Each field of the packet header structure type (`packet-header-type` property)
+corresponds to one parameter
+of the generated packet opening function (prefixed with `tph_`), except for the
+following special fields, which are automatically written if present:
+
+ * `magic` (32-bit unsigned [integer type object](#integer-type-object)):
+ packet magic number
+ * `uuid` ([array type object](#array-type-object) of 8-bit unsigned
+ [integer type objects](#integer-type-object), of length 16):
+ trace UUID (`uuid` property of trace object must be set)
+ * `stream_id` (unsigned [integer type object](#integer-type-object)):
+ stream ID
+
+As per CTF 1.8, the `stream_id` field is mandatory if there's more
+than one defined stream.
+
+**Example**:
+
+```yaml
+byte-order: le
+uuid: auto
+packet-header-type:
+ class: struct
+ fields:
+ magic: uint32
+ uuid:
+ class: array
+ length: 16
+ element-type: uint8
+ stream_id: uint16
```
-string {
- /* encoding: `none`, `UTF8` or `ASCII`; default is `none` */
- encoding = UTF8;
-}
+
+
+#### Stream object
+
+A CTF stream.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `packet-context-type` | [Type object](#type-objects) or string (alias name) | Type of packet context (must be a [structure type object](#structure-type-object)) | Required | N/A |
+| `event-header-type` | [Type object](#type-objects) or string (alias name) | Type of event header (must be a [structure type object](#structure-type-object)) | Optional | No event header |
+| `event-context-type` | [Type object](#type-objects) or string (alias name) | Type of stream event context (must be a [structure type object](#structure-type-object)) | Optional | No stream event context |
+| `events` | Associative array of event names (string) to [event objects](#event-object) | Stream events | Required | N/A |
+
+Each field of the packet context structure type (`packet-context-type` property)
+corresponds to one parameter
+of the generated packet opening function (prefixed with `spc_`), except for the
+following special fields, which are automatically written if present:
+
+ * `timestamp_begin` and `timestamp_end` (unsigned
+ [integer type objects](#integer-type-object), with
+   a clock value property mapping): the packet opening and closing
+   timestamps, respectively
+ * `packet_size` (unsigned [integer type object](#integer-type-object),
+ mandatory): packet size
+ * `content_size` (unsigned [integer type object](#integer-type-object),
+ mandatory): content size
+ * `events_discarded` (unsigned [integer type object](#integer-type-object)):
+ number of discarded events so far
+
+The `timestamp_end` field must exist if the `timestamp_begin` field exists,
+and vice versa.
+
+Each field of the event header structure type (`event-header-type` property)
+corresponds to one parameter (prefixed with `eh_`) of each generated
+tracing function of the stream, except for the following special
+fields, which are automatically written if present:
+
+ * `id` (unsigned [integer type object](#integer-type-object)): event ID
+ * `timestamp` (unsigned [integer type object](#integer-type-object), with
+ a clock value property mapping): event timestamp
+
+The `id` field must exist if there's more than one defined event in the
+stream.
+
+Each field of the stream event context structure type (`event-context-type`
+property) corresponds to one parameter (prefixed with `seh_`) of each
+generated tracing function of the stream.
+
+Each field name of the `packet-context-type`, `event-header-type`,
+and `event-context-type` properties must be a valid C identifier.
+
+The `events` property must contain at least one entry.
+
+**Example**:
+
+```yaml
+packet-context-type:
+ class: struct
+ fields:
+ timestamp_begin: clock-int
+ timestamp_end: clock-int
+ packet_size: uint32
+ content_size: uint32
+ events_discarded: uint16
+ my_custom_field: int12
+event-header-type:
+ class: struct
+ fields:
+ id: uint16
+ timestamp: clock-int
+event-context-type:
+ class: struct
+ fields:
+ obj_id: uint8
+events:
+ msg_in:
+ payload-type: msg-type
```
-CTF strings are always byte-aligned.
-A CTF string field will make barectf produce a corresponding C function
-parameter of type `const char*`. Bytes will be copied from this pointer
-until a byte of value 0 is found (which will also be written to the
-buffer to mark the end of the recorded string).
+#### Event object
+A CTF event.
-##### Enumerations
+**Properties**:
-CTF enumerations associate labels to ranges of integer values. They
-are a great way to trace named states using an integer. Here's an
-example:
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `log-level` | String (predefined log level name) or integer (zero or positive) | Log level of this event | Optional | No log level |
+| `context-type` | [Type object](#type-objects) or string (alias name) | Type of event context (must be a [structure type object](#structure-type-object)) | Optional | No event context |
+| `payload-type` | [Type object](#type-objects) or string (alias name) | Type of event payload (must be a [structure type object](#structure-type-object)) | Required | N/A |
-```
-enum : uint32_t {
- ZERO,
- ONE,
- TWO,
- TEN = 10,
- ELEVEN,
- "label with spaces",
- RANGE = 23 ... 193
-}
+Available log level names, for a given event, are defined by the
+`log-levels` property of the [metadata object](#metadata-object)
+containing it.
+
+Each field of the event context structure type (`context-type` property)
+corresponds to one parameter
+of the generated tracing function (prefixed with `ec_`).
+
+Each field of the event payload structure type (`payload-type` property)
+corresponds to one parameter
+of the generated tracing function (prefixed with `ep_`). The event
+payload structure type must contain at least one field.
+
+Each field name of the `context-type` and `payload-type` properties must be a
+valid C identifier.
+
+**Example**:
+
+```yaml
+log-level: error
+context-type:
+ class: struct
+ fields:
+ msg_id: uint16
+payload-type:
+ class: struct
+ fields:
+ src:
+ type: string
+ dst:
+ type: string
+ payload_sz: uint32
```
-Unless the first entry specifies a value, CTF enumerations are
-always started at 0. They work pretty much like their C counterpart,
-although they support ranges and literal strings as labels.
-CTF enumerations are associated with a CTF integer type (`uint32_t`
-above). This identifier must be an existing integer type alias.
+#### Type objects
-A CTF enumeration field will make barectf produce a corresponding C
-integer function parameter compatible with the associated CTF integer type.
+Type objects represent CTF types.
+**Common properties**:
-##### Static arrays
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `class` | String | Type class | Required if `inherit` property is absent | N/A |
+| `inherit` | String | Name of type alias from which to inherit properties | Required if `class` property is absent | N/A |
-Structure field names may be followed by a subscripted constant to
-define a static array of the field type:
+The accepted values for the `class` property are:
-```
-struct {
- integer {size = 16;} _field[10];
-}
-```
+| `class` property value | CTF type |
+|---|---|
+| `int`<br>`integer` | Integer type |
+| `flt`<br>`float`<br>`floating-point` | Floating point number type |
+| `enum`<br>`enumeration` | Enumeration type |
+| `str`<br>`string` | String type |
+| `struct`<br>`structure` | Structure type |
+| `array` | Array/sequence types |
+| `var`<br>`variant` | Variant type |
-In the above structure, `_field` is a static array of ten 16-bit integers.
+The `inherit` property accepts the name of any previously defined
+type alias. Any property of a type object that inherits from another
+type object overrides the corresponding parent property as follows:
-A CTF static array field will make barectf produce a `const void*` C function
-parameter. Bytes will be copied from this pointer to match the total static
-array size. In the example above, the integer size is 16-bit, thus its
-default alignment is 8-bit, so 20 bytes would be copied.
+ * Booleans, numbers, and strings: value of parent property with
+ the same name is replaced
+ * Arrays: new elements are appended to parent array
+ * Associative arrays: properties sharing the name of parent
+ properties completely replace them; new properties are
+ added to the parent associative array
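For example, assuming a previously defined integer type alias named `uint32` (the alias name here is an assumption), the following type object inherits all of its properties and overrides only the byte order and display radix:

```yaml
inherit: uint32
byte-order: be
base: 16
```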
-The inner element of a CTF static array _must be at least byte-aligned_
-(8-bit), either by forcing its alignment, or by ensuring it manually
-when placing fields one after the other. This means the following static
-array is valid for barectf:
-```
-struct {
- /* ... */
- integer {size = 5;} _field[10];
-}
-```
+##### Integer type object
-as long as the very first 5-bit, 1-bit aligned integer element starts
-on an 8-bit boundary.
+A CTF integer type.
+**Properties**:
-##### Dynamic arrays
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `size` | Integer (positive) | Size (bits) (1 to 64) | Required | N/A |
+| `align` | Integer (positive) | Alignment (bits) (power of two) | Optional | 8 if `size` property is a multiple of 8, else 1 |
+| `signed` | Boolean | Signedness | Optional | `false` (unsigned) |
+| `base` | Integer | Display radix (2, 8, 10, or 16) | Optional | 10 |
+| `byte-order` | String | Byte order (`le` for little-endian, `be` for big-endian, or `native` to use the byte order defined at the trace level) | Optional | `native` |
+| `property-mappings` | Array of [property mapping objects](#property-mapping-object) | Property mappings of this integer type | Optional | N/A |
-Just like static arrays, dynamic arrays are defined using a subscripted
-length, albeit in this case, this length refers to another field using
-the dot notation. Dynamic arrays are called _sequences_ in the CTF
-specification.
+The `property-mappings` array property currently accepts only one element.
-Here's an example:
+**Example**:
+```yaml
+class: int
+size: 12
+signed: false
+base: 8
+byte-order: le
+property-mappings:
+ - type: clock
+ name: my_clock
+ property: value
```
-struct {
- uint32_t _length;
- integer {size = 16;} _field[_length];
-}
-```
-In the above structure, `_field` is a dynamic array of `_length`
-16-bit integers.
+**Equivalent C type**:
+
+ * Unsigned: `uint8_t`, `uint16_t`, `uint32_t`, or `uint64_t`, depending on the
+ `size` property
+ * Signed: `int8_t`, `int16_t`, `int32_t`, or `int64_t`, depending on the
+ `size` property
+
+
+###### Property mapping object
+
+A property mapping object associates an integer type with a stateful
+object's property. When the integer type is decoded from a CTF binary
+stream, the associated object's property is updated.
+
+Currently, the only available stateful object property is the
+current value of a given clock.
-There are various scopes to which a dynamic array may refer:
+**Properties**:
- * no prefix: previous field in the same structure, or in parent
- structures until found
- * `event.fields.` prefix: field of the event fields
- * `event.context.` prefix: field of the event context if it exists
- * `stream.event.context.` prefix: field of the stream event context
- if it exists
- * `stream.event.header.` prefix: field of the event header
- * `stream.packet.context.` prefix: field of the packet context
- * `trace.packet.header.` prefix: field of the packet header
- * `env.` prefix: static property of the environment block
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `type` | String | Object type (always `clock`) | Required | N/A |
+| `name` | String | Clock name | Required | N/A |
+| `property` | String | Clock property name (always `value`) | Required | N/A |
-Here's another, more complex example:
+**Example**:
+```yaml
+type: clock
+name: my_clock
+property: value
```
-struct {
- uint32_t _length;
- string _other_field[stream.event.context.length];
- float _static_array_of_dynamic_arrays[10][_length];
-}
+
+
+##### Floating point number type object
+
+A CTF floating point number type.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `size` | [Floating point number type size object](#floating-point-number-type-size-object) | Size parameters | Required | N/A |
+| `align` | Integer (positive) | Alignment (bits) (power of two) | Optional | 8 |
+| `byte-order` | String | Byte order (`le` for little-endian, `be` for big-endian, or `native` to use the byte order defined at the trace level) | Optional | `native` |
+
+**Example**:
+
+```yaml
+class: float
+size:
+ exp: 11
+ mant: 53
+align: 64
+byte-order: be
```
-The above examples also demonstrates that dynamic arrays and static
-arrays may contain eachother. `_other_field` is a dynamic array of
-`stream.event.context.length` strings. `_static_array_of_dynamic_arrays`
-is a static array of 10 dynamic arrays of `_length` floating point
-numbers. This syntax follows the C language.
+**Equivalent C type**:
-A CTF dynamic array field will make barectf produce a `const void*` C function
-parameter. Bytes will be copied from this pointer to match the
-total dynamic array size. The previously recorded length will be
-found automatically (always an offset from the beginning of the
-stream packet, or from the beginning of the current event).
+ * 8-bit exponent, 24-bit mantissa, 32-bit alignment: `float`
+ * 11-bit exponent, 53-bit mantissa, 64-bit alignment: `double`
+ * Every other combination: `uint64_t`
-barectf has a few limitations concerning dynamic arrays:
- * The inner element of a CTF dynamic array _must be at least byte-aligned_
- (8-bit), either by forcing its alignment, or by ensuring it manually
- when placing fields one after the other.
- * The length type must be a 32-bit, byte-aligned unsigned integer
- with a native byte order.
+###### Floating point number type size object
+The CTF floating point number type is encoded, in a binary stream,
+following [IEEE 754-2008](https://en.wikipedia.org/wiki/IEEE_floating_point)'s
+interchange format. The required parameters are the exponent and
+significand sizes, in bits. In CTF, the _mantissa_ size includes the
+sign bit, whereas IEEE 754-2008's significand size does not include it.
-##### Structures
+**Properties**:
-Structures contain fields associating a name to a type. The fields
-are recorded in the specified order within the CTF binary stream.
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `exp` | Integer (positive) | Exponent size (bits) | Required | N/A |
+| `mant` | Integer (positive) | Mantissa size (significand size + 1) (bits) | Required | N/A |
-Here's an example:
+As per IEEE 754-2008, the sum of the `exp` and `mant` properties must be a
+multiple of 32.
-```
-struct {
- uint32_t _a;
- int16_t _b;
- string {encoding = ASCII;} _c;
-}
-```
+The sum of the `exp` and `mant` properties must be less than or equal to 64.
-The default alignment of a structure is the largest alignment amongst
-its fields. For example, the following structure has a 32-bit alignment:
+**Example**:
-```
-struct {
- uint16_t _a; /* alignment: 16 */
- struct { /* alignment: 32 */
- uint32_t _a; /* alignment: 32 */
- string; _b; /* alignment: 8 */
- } _b;
- integer {size = 64;} _c; /* alignment: 8 */
-}
+```yaml
+exp: 8
+mant: 24
```
-This default alignment may be overridden using a special `align()`
-option after the structure is closed:
-```
-struct {
- uint16_t _a;
- struct {
- uint32_t _a;
- string; _b;
- } _b;
- integer {size = 64;} _c;
-} align(16)
-```
+##### Enumeration type object
-You may use structures as field types, although they must have a
-_known size_ when running barectf. This means they cannot contain
-sequences or strings.
+A CTF enumeration type.
-A CTF structure field will make barectf produce a `const void*` C function
-parameter. The structure (of known size) will be copied as is to the
-current buffer, respecting its alignment.
+Each label of an enumeration type is mapped to a single value, or to a
+range of values.
-Note that barectf requires inner structures to be at least byte-aligned.
+**Properties**:
-Be careful when using CTF structures for recording binary structures
-declared in C. You need to make sure your C compiler aligns structure
-fields and adds padding exactly in the way you define your equivalent
-CTF structure. For example, using GCC on the x86 architecture, 3 bytes
-are added after field `a` in the following C structure since `b` is
-32-bit aligned:
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `value-type` | [Integer type object](#integer-type-object) or string (alias name) | Supporting integer type | Required | N/A |
+| `members` | Array of [enumeration type member objects](#enumeration-type-member-object) | Enumeration members | Required | N/A |
-```c
-struct my_struct {
- char a;
- unsigned int b;
-};
-```
+The `members` property must contain at least one element. If a member
+is a string, its associated value is computed as follows:
-It would be wrong to use the following CTF structure:
+ * If the member is the first one of the `members` array, its value
+ is 0.
+ * If the previous member is a string, its value is the previous
+ member's computed value + 1.
+ * If the previous member is a single value member, its value is
+ the previous member's value + 1.
+ * If the previous member is a range member, its value is the previous
+ member's upper bound + 1.
-```
-struct {
- integer {size = 8; signed = true;} a;
- integer {size = 32;} b;
-}
-```
+The member values must not overlap each other.
-since field `b` is byte-aligned by default. This one would work fine:
+**Example**:
-```
-struct {
- integer {size = 8; signed = true;} a;
- integer {size = 32; align = 32;} b;
-}
+```yaml
+class: enum
+value-type: uint8
+members:
+ - ZERO
+ - ONE
+ - TWO
+ - label: SIX
+ value: 6
+ - SE7EN
+ - label: TWENTY TO FORTY
+ value: [10, 40]
+ - FORTY-ONE
```
-CTF structures can prove very useful for recording protocols with named
-fields when reading the trace. For example, here's the CTF structure
-describing the IPv4 header (excluding options):
+**Equivalent C type**: equivalent C type of supporting integer type
+(see [integer type object documentation](#integer-type-object) above).
-```
-struct ipv4_header {
- integer {size = 4;} version;
- integer {size = 4;} ihl;
- integer {size = 6;} dscp;
- integer {size = 2;} ecn;
- integer {size = 16; byte_order = network;} total_length;
- integer {size = 16; byte_order = network;} identification;
- integer {size = 1;} flag_more_fragment;
- integer {size = 1;} flag_dont_fragment;
- integer {size = 1;} flag_reserved;
- integer {size = 13; byte_order = network;} fragment_offset;
- integer {size = 8;} ttl;
- integer {size = 8;} protocol;
- integer {size = 16; byte_order = network;} header_checksum;
- integer {size = 8;} src_ip_addr[4];
- integer {size = 8;} dst_ip_addr[4];
-}
-```
-Although this complex structure has more than ten independent fields,
-the generated C function would only call a 20-byte `memcpy()`, making
-it fast to record. Bits will be unpacked properly and values displayed
-in a human-readable form by the CTF reader thanks to the CTF metadata.
+###### Enumeration type member object
+The member of a CTF enumeration type.
-#### Type aliases
+If it's a string, the string is the member's label, and the member's
+value depends on the previous member's value (see the explanation in the
+[enumeration type object documentation](#enumeration-type-object) above).
-Type aliases associate a name with a type definition. Any type may have
-any name. They are similar to C `typedef`s.
+Otherwise, it's a complete member object, with the following properties:
-Examples:
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `label` | String | Member's label | Required | N/A |
+| `value` | Integer (single value) or array of two integers (range value) | Member's value | Required | N/A |
-```
-typealias integer {
- size = 16;
- align = 4;
- signed = true;
- byte_order = network;
- base = hex;
- encoding = UTF8;
-} := my_int;
-```
+If the `value` property is an array of two integers, the member's label
+is associated with this range, both lower and upper bounds included. The
+array's first element must be less than or equal to the second element.
-```
-typealias floating_point {
- exp_dig = 8;
- mant_dig = 8;
- align = 16;
- byte_order = be;
-} := my_float;
-```
+**Example**:
+```yaml
+label: my enum label
+value: [-25, 78]
```
-typealias string {
- encoding = ASCII;
-} := my_string;
+
+
+##### String type object
+
+A CTF null-terminated string type.
+
+This object has no properties.
+
+**Example**:
+
+```yaml
+class: string
```
+**Equivalent C type**: `const char *`.
+
+
+##### Array type object
+
+A CTF array or sequence (variable-length array) type.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `element-type` | [Type object](#type-objects) or string (alias name) | Type of array's elements | Required | N/A |
+| `length` | Positive integer (static array) or string (variable-length array) | Array type's length | Required | N/A |
+
+If the `length` property is a string, the array type has a
+variable length (CTF sequence). In this case, the property's value
+refers to a previous structure field. The `length` property's value
+may be prefixed with one of the following strings to indicate an
+absolute lookup within a previous (or current) dynamic scope:
+
+ * `trace.packet.header.`: trace packet header
+ * `stream.packet.context.`: stream packet context
+ * `stream.event.header.`: stream event header
+ * `stream.event.context.`: stream event context
+ * `event.context.`: event context
+ * `event.payload.`: event payload
+
+The pointed field must have an unsigned integer type.
+
+**Example** (16 bytes):
+
+```yaml
+class: array
+length: 16
+element-type:
+ class: int
+ size: 8
```
-typealias enum : uint32_t {
- ZERO,
- ONE,
- TWO,
- TEN = 10,
- ELEVEN,
- "label with spaces",
- RANGE = 23 ... 193
-} := my_enum;
+
+**Example** (variable-length array of null-terminated strings):
+
+```yaml
+class: array
+length: previous_field
+element-type:
+ class: string
```
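A length field may also be looked up absolutely in a previous dynamic scope. For instance, a sequence whose length lives in the stream event context could be declared as follows (the `msg_count` field name is hypothetical; such a field must exist in the stream event context structure type and have an unsigned integer type):

```yaml
class: array
length: stream.event.context.msg_count
element-type:
  class: int
  size: 32
```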
+
+##### Structure type object
+
+A CTF structure type, i.e. a list of fields, each field
+having a name and a CTF type.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `min-align` | Integer (positive) | Minimum alignment (bits) (power of two) | Optional | 1 |
+| `fields` | Associative array of field names (string) to [type objects](#type-objects) or strings (alias names) | Structure type's fields | Optional | `{}` |
+
+The order of the entries in the `fields` property is important; it is in
+this order that the fields are serialized in binary streams.
+
+**Example**:
+
+```yaml
+class: struct
+min-align: 32
+fields:
+ msg_id: uint8
+ src:
+ class: string
+ dst:
+ class: string
```
-typealias struct {
- uint32_t _length;
- string _other_field;
- float _hello[10][_length];
-} align(8) := my_struct;
+
+
+##### Variant type object
+
+A CTF variant type, i.e. a tagged union of CTF types.
+
+**Properties**:
+
+| Property | Type | Description | Required? | Default value |
+|---|---|---|---|---|
+| `tag` | String | Variant type's tag | Required | N/A |
+| `types` | Associative array of strings to [type objects](#type-objects) or strings (alias names) | Possible types | Required | N/A |
+
+The `tag` property's value refers to a previous structure field.
+The value may be prefixed with one of the following strings to indicate
+an absolute lookup within a previous (or current) dynamic scope:
+
+ * `trace.packet.header.`: trace packet header
+ * `stream.packet.context.`: stream packet context
+ * `stream.event.header.`: stream event header
+ * `stream.event.context.`: stream event context
+ * `event.context.`: event context
+ * `event.payload.`: event payload
+
+The pointed field must have an enumeration type. Each type name in the
+`types` property must correspond to a member label of this enumeration
+type; this is how the variant's current type is selected from the
+value of its tag.
+
+**Example**:
+
+```yaml
+class: variant
+tag: my_choice
+types:
+ a:
+ class: string
+ b: int32
+ c:
+ class: float
+ size:
+ align: 32
+ exp: 8
+ mant: 24
```
### Running the `barectf` command
Using the `barectf` command-line utility is easy. In its simplest form,
-it outputs a few C99 files out of a CTF metadata file:
+it outputs a CTF metadata file and a few C files out of a
+YAML configuration file:
- barectf metadata
+ barectf config.yaml
-will output in the current working directory:
+will output, in the current working directory:
- * `barectf_bitfield.h`: macros used by tracing functions to pack bits
+ * `metadata`: CTF metadata file
+ * `barectf-bitfield.h`: macros used by tracing functions to pack bits
* `barectf.h`: other macros and prototypes of context/tracing functions
* `barectf.c`: context/tracing functions
-You may also want to produce `static inline` functions if your target
-system has enough memory to hold the extra code:
-
- barectf --static-inline metadata
+`barectf` is the default name of the generated files, and `barectf_` is
+the default prefix of the generated C functions and structures. The
+prefix is read from the configuration file (see the
+[configuration object documentation](#configuration-object)), but
+you may override it on the command line:
-`barectf` is the default name of the files and the default prefix of
-barectf C functions and structures. You may use a custom prefix:
-
- barectf --prefix trace metadata
+ barectf --prefix my_app_ config.yaml
You may also output the files elsewhere:
- barectf --output /custom/path metadata
+ barectf --code-dir src --headers-dir include --metadata-dir ctf config.yaml
+
-### Using the generated C99 code
+### Using the generated C code
This section assumes you ran `barectf` with no options:
- barectf metadata
+ barectf config.yaml
-The command generates C99 structures and functions to initialize
-and finalize bare CTF contexts. It also generates as many tracing functions
-as there are events described in the CTF metadata file.
+The command generates C structures and functions to initialize
+barectf contexts, open packets, and close packets. It also generates as many
+tracing functions as there are events defined in the YAML configuration
+file.
-Before starting the record events, you must initialize a barectf
-context. This is done using `barectf_init()`.
+An application should never have to initialize barectf contexts,
+open packets, or close packets; this is the purpose of a specific barectf
+platform, which wraps those calls in its own initialization and
+finalization functions.
-The clock callback parameter (`clock_cb`) is used to get the clock whenever
-a tracing function is called. Each platform has its own way of obtaining
-the a clock value, so this is left to user implementation. The actual
-return type of the clock callback depends on the clock value CTF integer
-type defined in the CTF metadata.
+The barectf project provides a few platforms in the [`platforms`](platforms)
+directory. Each one contains a `README.md` file explaining how to use
+the platform. If you're planning to write your own platform,
+read the next subsection. Otherwise, skip it.
-The `barectf_init()` function name will contain the decimal stream
-ID if you have more than one stream. You must allocate the context
-structure yourself.
-Example:
+#### Writing a barectf platform
-```c
-struct barectf_ctx* barectf_ctx = platform_alloc(sizeof(*barectf_ctx));
+A **_barectf platform_** is responsible for:
-barectf_init(barectf_ctx, buf, 8192, platform_get_clock, NULL);
-```
+ 1. Providing some initialization and finalization functions
+ for the tracing infrastructure of the target. The initialization
+ function is responsible for initializing a barectf context,
+ providing the platform callback functions, and for opening the very
+ first stream packet(s). The finalization function is responsible
+ for closing, usually when not empty, the very last stream
+ packet(s).
+ 2. Implementing the platform callback functions to accommodate the
+ target system. The main purposes of those callback functions are:
+ * Getting the current value of clock(s).
+ * Doing something with a packet once it's full. This is how
+ a ring buffer of packets may be implemented. The platform
+ may also be naive and write the full packets to the file system
+ directly.
-This initializes a barectf context with a buffer of 8192 bytes.
+Thus, the traced application itself should never have to call
+the barectf initialization, packet opening, and packet closing
+functions. The application only deals with initializing/finalizing
+the platform, and calling the tracing functions.
-After the barectf context is initialized, open a packet using
-`barectf_open_packet()`. If you have any non-special fields in
-your stream packet context, `barectf_open_packet()` accepts a
-parameter for each of them since the packet context is written
-at this moment:
+The following diagram shows how each part connects with
+each other:
-```
-barectf_open_packet(barectf_ctx);
-```
+![](http://0x3b.org/ss/placoderm625.png)
+
+The following subsections explain what should exist in each
+platform function.
-Once the packet is opened, you may call any of the tracing functions to record
-CTF events into the context's buffer.
-As an example, let's take the following CTF event definition:
+##### Platform initialization function
+A barectf platform initialization function is responsible for
+initializing barectf context(s) (calling `barectf_init()`,
+where `barectf_` is the configured prefix), and opening the very
+first packet (calling `barectf_stream_open_packet()` with
+target-specific parameters, for each stream, where `stream` is
+the stream name).
+
+barectf generates one context C structure for each defined stream.
+They all contain the same first member, a structure with common
+properties.
+
+barectf generates a single context initialization function:
+
+```c
+void barectf_init(
+ void *ctx,
+ uint8_t *buf,
+ uint32_t buf_size,
+ struct barectf_platform_callbacks cbs,
+ void *data
+);
```
-event {
- name = "my_event";
- id = 0;
- stream_id = 0;
- fields := struct {
- integer {size = 32;} _a;
- integer {size = 14; signed = true;} _b;
- floating_point {exp_dig = 8; mant_dig = 24; align = 32;} _c;
- struct {
- uint32_t _a;
- uint32_t _b;
- } _d;
- string _e;
+
+This function must be called with each stream-specific context
+structure to be used afterwards. The parameters are:
+
+ * `ctx`: stream-specific barectf context (allocated by caller)
+ * `buf`: buffer to use for this stream's packet (allocated by caller)
+ * `buf_size`: size of `buf` in bytes
+ * `cbs`: platform callback functions to be used with this
+ stream-specific context
+ * `data`: user data passed to platform callback functions (`cbs`)
+
+**Example**:
+
+```c
+#define BUF_SZ 4096
+
+void platform_init(/* ... */)
+{
+ struct barectf_my_stream_ctx *ctx;
+ uint8_t *buf;
+ struct my_data *my_data;
+ struct barectf_platform_callbacks cbs = {
+ /* ... */
};
-};
+
+ ctx = platform_alloc(sizeof(*ctx));
+ buf = platform_alloc(BUF_SZ);
+ my_data = platform_alloc(sizeof(*my_data));
+ my_data->ctx = ctx;
+ barectf_init(ctx, buf, BUF_SZ, cbs, my_data);
+
+ /* ... */
+}
```
-In this example, we assume the stream event context and the event context
-are not defined for this event. `barectf` generates the following tracing
-function prototype:
+barectf generates one packet opening and one packet closing
+function per defined stream, since each stream may have custom
+parameters at the packet opening time, and custom offsets of
+fields to write at packet closing time.
+
+The platform initialization function should open the very first packet
+of each stream to be used, because the tracing functions expect the
+current packet to be open.
+
+Here's an example of a packet opening function prototype:
```c
-int barectf_trace_my_event(
- struct barectf_ctx* ctx,
- uint32_t param_ef__a,
- int16_t param_ef__b,
- float param_ef__c,
- const void* param_ef__d,
- const char* param_ef__e
+void barectf_my_stream_open_packet(
+ struct barectf_my_stream_ctx *ctx,
+ float spc_something
);
```
-When called, this function first calls the clock callback to get a clock
-value as soon as possible. It then proceeds to record each field with
-proper alignment and updates the barectf context. On success, 0 is returned.
-Otherwise, one of the following negative errors is returned:
+The function needs the stream-specific barectf context, as well as any
+custom trace packet header or stream packet context field; in this
+last example, `something` is a floating point number stream packet context
+field.
+
+
+##### barectf packet information API
+
+A small API is available to query the state of the current packet of a
+given barectf context:
+
+```c
+uint32_t barectf_packet_size(void *ctx);
+int barectf_packet_is_full(void *ctx);
+int barectf_packet_is_empty(void *ctx);
+uint32_t barectf_packet_events_discarded(void *ctx);
+uint8_t *barectf_packet_buf(void *ctx);
+void barectf_packet_set_buf(void *ctx, uint8_t *buf, uint32_t buf_size);
+uint32_t barectf_packet_buf_size(void *ctx);
+int barectf_packet_is_open(void *ctx);
+```
+
+`barectf_packet_is_full()` returns 1 if the context's current packet
+is full (no space left for any event), 0 otherwise.
+
+`barectf_packet_is_empty()` returns 1 if the context's current packet
+is empty (no recorded events), 0 otherwise.
+
+`barectf_packet_events_discarded()` returns the number of lost (discarded)
+events _so far_ for a given stream.
+
+The buffer size (`buf_size` parameter of `barectf_packet_set_buf()` and
+return value of `barectf_packet_buf_size()`) is always a number of bytes.
+
+`barectf_packet_is_open()` returns 1 if the context's current packet
+is open (the packet opening function was called with this context).
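As a sketch of how a platform might combine these queries (this is a simplified, hand-written model of the API's semantics, not the generated code; every name below is a stand-in), a close path can write a packet to the back-end only when it's both open and non-empty:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hand-written stand-in for a barectf stream context: the platform
 * normally treats the generated context as opaque and only goes
 * through the packet information API modeled here. */
struct mock_ctx {
    uint8_t *buf;        /* current packet buffer */
    uint32_t buf_size;   /* bytes */
    int is_open;         /* packet opening function was called */
    uint32_t n_events;   /* events recorded in the current packet */
};

static int mock_packet_is_open(const struct mock_ctx *ctx)
{
    return ctx->is_open;
}

static int mock_packet_is_empty(const struct mock_ctx *ctx)
{
    return ctx->n_events == 0;
}

/* Naive back-end: a flat buffer to which full packets are appended. */
static uint8_t backend[4096];
static uint32_t backend_at;

/* Platform close path: skip packets which are not open or which hold
 * no events. Returns 1 if the packet was written, 0 otherwise. */
static int platform_try_write_packet(struct mock_ctx *ctx)
{
    if (!mock_packet_is_open(ctx) || mock_packet_is_empty(ctx)) {
        return 0;
    }

    memcpy(&backend[backend_at], ctx->buf, ctx->buf_size);
    backend_at += ctx->buf_size;
    ctx->is_open = 0;
    return 1;
}
```

A real platform would call `barectf_packet_is_open()` and `barectf_packet_is_empty()` on the generated context instead of these mocks.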
- * `-EBARECTF_NOSPC`: no space left in the context's buffer; the event
- was **not** recorded. You should call `barectf_close_packet()` to finalize the
- CTF packet.
-`barectf_close_packet()` may be called at any time.
-When `barectf_close_packet()` returns, the packet is complete and ready
-to be read by a CTF reader. CTF packets may be concatenated in a single
-CTF stream file. You may reuse the same context and buffer to record another
-CTF packet, as long as you call `barectf_open_packet()` before calling any
-tracing function.
+##### Platform callback functions
+
+The callback functions to implement for a given platform are
+in the generated `barectf_platform_callbacks` C structure. This
+structure will contain:
+
+ * One callback function per defined clock, using the clock's
+ return C type. Those functions must return the current clock
+ values.
+ * `is_backend_full()`: is the back-end full? If a new packet
+ is opened now, does it have its reserved space in the back-end?
+ Return 0 if it does, 1 otherwise.
+ * `open_packet()`: this callback function **must** call the relevant
+ packet opening function.
+ * `close_packet()`: this callback function **must** call the
+ relevant packet closing function _and_ copy/move the current packet
+ to the back-end.
+
+What exactly is a _back-end_ is left to the platform implementor. It
+could be a ring buffer of packets, or it could be dumber: `close_packet()`
+always appends the current packet to some medium, and `is_backend_full()`
+always returns 0 (back-end is never full).
+
+Typically, if `is_backend_full()` returns 0, then the next
+call to `close_packet()` should be able to write the current packet.
+If `is_backend_full()` returns 1, there will be lost (discarded)
+events. If a stream packet context has an `events_discarded` field,
+it will be written to accordingly when a packet is closed.
+
+If a platform needs double buffering, `open_packet()` is the callback
+function where packet buffers would be swapped (before calling
+the barectf packet opening function).
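To make the callback contract concrete, here is a minimal, hand-written sketch (the structure and every name below are assumptions, not the generated `barectf_platform_callbacks` type): a naive back-end which is never full and simply counts written packets.

```c
#include <stdint.h>

/* Hand-written mirror of the callback set described above; the real
 * structure is generated by barectf, with one clock callback per
 * defined clock (using that clock's return C type). */
struct callbacks {
    uint64_t (*default_clock_get_value)(void *data);
    int (*is_backend_full)(void *data);
    void (*open_packet)(void *data);
    void (*close_packet)(void *data);
};

struct naive_platform {
    uint64_t ticks;
    unsigned int packets_written;
};

static uint64_t get_clock(void *data)
{
    /* a real platform would read a hardware timer here */
    return ((struct naive_platform *) data)->ticks++;
}

static int is_backend_full(void *data)
{
    (void) data;
    return 0; /* dumb back-end: always has room */
}

static void open_packet(void *data)
{
    (void) data;
    /* a real platform would call the generated packet opening
     * function here (swapping buffers first if double buffering) */
}

static void close_packet(void *data)
{
    /* a real platform would call the generated packet closing
     * function, then copy/move the packet to the back-end */
    ((struct naive_platform *) data)->packets_written++;
}

static const struct callbacks naive_cbs = {
    get_clock,
    is_backend_full,
    open_packet,
    close_packet,
};
```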
+
+
+##### Platform finalization function
+
+The platform finalization function should be called by the application
+once tracing is no longer required. It is responsible for closing the
+very last packet of each stream.
+
+Typically, assuming there's only one stream (named `my_stream` in this
+example), the finalization function will look like this:
+
+```c
+void platform_tracing_finalize(struct platform_data *platform_data)
+{
+    if (barectf_packet_is_open(platform_data->ctx) &&
+        !barectf_packet_is_empty(platform_data->ctx)) {
+        barectf_my_stream_close_packet(platform_data->ctx);
+
+        /*
+         * Do whatever is necessary here to write the packet
+         * to the platform's back-end.
+         */
+    }
+}
+```
+
+That is: if the packet is still open (thus not closed and written yet)
+_and_ it contains at least one event (not empty), close and write the last
+packet.
+
+Note, however, that closing an open, yet empty packet can still be
+useful: closing it updates its packet context, including the discarded
+events count (events could have been lost between the last packet
+closing time and now, which is quite possible if the back-end became
+full after the previous packet was closed and written).
+
+
+#### Calling the generated tracing functions
+
+Calling the generated tracing functions is what the traced application
+actually does.
+
+For a given prefix named `barectf`, a given stream named `stream`, and
+a given event named `event`, the generated tracing function name is
+`barectf_stream_trace_event()`.
+
+The first parameter of a tracing function is always the stream-specific
+barectf context. Then, in this order:
+
+ * One parameter for each custom event header field
+   (prefixed with `seh_`)
+ * One parameter for each custom stream event context field
+   (prefixed with `sec_`)
+ * One parameter for each custom event context field
+   (prefixed with `ec_`)
+ * One parameter for each custom event payload field
+   (prefixed with `ep_`)
+
+A tracing function returns nothing: it either succeeds (the event
+is serialized in the current packet) or fails when there's no
+space left (the context's discarded events count is incremented).
+
+**Example**:
+
+Given the following [event object](#event-object), named `my_event`,
+placed in a stream named `default` with no custom event header/stream event
+context fields:
+
+```yaml
+context-type:
+  class: struct
+  fields:
+    msg_id:
+      class: int
+      size: 16
+payload-type:
+  class: struct
+  fields:
+    src:
+      class: string
+    dst:
+      class: string
+    a_id:
+      class: int
+      size: 3
+    b_id:
+      class: int
+      size: 7
+      signed: true
+    c_id:
+      class: int
+      size: 15
+    amt:
+      class: float
+      align: 32
+      size:
+        exp: 8
+        mant: 24
+```
+
+barectf will generate the following tracing function prototype:
+
+```c
+/* trace (stream "default", event "my_event") */
+void barectf_default_trace_my_event(
+    struct barectf_default_ctx *ctx,
+    uint16_t ec_msg_id,
+    const char *ep_src,
+    const char *ep_dst,
+    uint8_t ep_a_id,
+    int8_t ep_b_id,
+    uint16_t ep_c_id,
+    float ep_amt
+);
+```
### Reading CTF traces
-To form a complete CTF trace, put your CTF metadata file (it should be
-named `metadata`) and your binary stream files (concatenations of CTF
-packets written by C code generated by barectf) in the same directory.
+To form a complete CTF trace, the `metadata` file generated by the
+`barectf` command-line tool and the binary stream files generated
+by the application (or by an external consumer, depending on the
+platform) should be placed in the same directory.
To read a CTF trace, use [Babeltrace](http://www.efficios.com/babeltrace).
-Babeltrace is packaged by most major distributions (`babeltrace`).
-Babeltrace ships with a command-line utility that can convert a CTF trace
-to human-readable text output. Also, it includes a Python binding so
-that you may analyze a CTF trace using a custom script.
+Babeltrace is packaged by most major distributions as the `babeltrace`
+package. Babeltrace ships with a command-line utility that can convert a
+CTF trace to human-readable text output. Also, it includes Python bindings
+so that you may analyze a CTF trace using a custom script.
In its simplest form, the `babeltrace` command-line converter is quite
easy to use:
babeltrace /path/to/directory/containing/ctf/files
-See `babeltrace --help` for more options.
-
-You may also use the Python 3 binding of Babeltrace to create custom
-analysis scripts.
+See `babeltrace --help` and `man babeltrace` for more options.
# The MIT License (MIT)
#
-# Copyright (c) 2014-2015 Philippe Proulx <philippe.proulx@efficios.com>
+# Copyright (c) 2014-2015 Philippe Proulx <pproulx@efficios.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
-__version__ = '0.3.1'
+__version__ = '2.0.0-dev'
+
+
+def _split_version_suffix():
+    # split into (version, suffix); suffix is None if there is none
+    parts = __version__.split('-', 1)
+
+    return parts if len(parts) == 2 else [parts[0], None]
+
+
+def get_version_tuple():
+    version, suffix = _split_version_suffix()
+    parts = version.split('.')
+
+    return (int(parts[0]), int(parts[1]), int(parts[2]))
+
+
+def get_version_suffix():
+    return _split_version_suffix()[1]
# The MIT License (MIT)
#
-# Copyright (c) 2014-2015 Philippe Proulx <philippe.proulx@efficios.com>
+# Copyright (c) 2014-2015 Philippe Proulx <pproulx@efficios.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# THE SOFTWARE.
from termcolor import cprint, colored
-import barectf.templates
-import pytsdl.parser
-import pytsdl.tsdl
-import collections
+import barectf.tsdl182gen
+import barectf.config
+import barectf.gen
import argparse
+import os.path
import barectf
import sys
import os
import re
-def _perror(msg, exit_code=1):
- cprint('error: {}'.format(msg), 'red', attrs=['bold'], file=sys.stderr)
- sys.exit(exit_code)
+def _perror(msg):
+    cprint('Error: ', 'red', end='', file=sys.stderr)
+    cprint(msg, 'red', attrs=['bold'], file=sys.stderr)
+    sys.exit(1)
-def _pinfo(msg):
- cprint(':: {}'.format(msg), 'blue', attrs=['bold'])
+def _pconfig_error(e):
+    lines = []
+
+    while True:
+        if e is None:
+            break
+
+        lines.append(str(e))
+
+        if not hasattr(e, 'prev'):
+            break
+
+        e = e.prev
+
+    if len(lines) == 1:
+        _perror(lines[0])
+
+    cprint('Error:', 'red', file=sys.stderr)
+
+    for i, line in enumerate(lines):
+        suf = ':' if i < len(lines) - 1 else ''
+        cprint(' ' + line + suf, 'red', attrs=['bold'], file=sys.stderr)
+
+    sys.exit(1)
def _psuccess(msg):
def _parse_args():
ap = argparse.ArgumentParser()
- ap.add_argument('-O', '--output', metavar='OUTPUT', action='store',
+    ap.add_argument('-c', '--code-dir', metavar='DIR', action='store',
+                    default=os.getcwd(),
+                    help='output directory of C source file')
+    ap.add_argument('-H', '--headers-dir', metavar='DIR', action='store',
+                    default=os.getcwd(),
+                    help='output directory of C header files')
+    ap.add_argument('-m', '--metadata-dir', metavar='DIR', action='store',
default=os.getcwd(),
- help='output directory of C files')
+                    help='output directory of CTF metadata')
ap.add_argument('-p', '--prefix', metavar='PREFIX', action='store',
- default='barectf',
- help='custom prefix for C function and structure names')
- ap.add_argument('-s', '--static-inline', action='store_true',
- help='generate static inline C functions')
- ap.add_argument('-c', '--manual-clock', action='store_true',
- help='do not use a clock callback: pass clock value to tracing functions')
+                    help='override configuration\'s prefix')
ap.add_argument('-V', '--version', action='version',
- version='%(prog)s v{}'.format(barectf.__version__))
- ap.add_argument('metadata', metavar='METADATA', action='store',
- help='CTF metadata input file')
+                    version='%(prog)s {}'.format(barectf.__version__))
+    ap.add_argument('config', metavar='CONFIG', action='store',
+                    help='barectf YAML configuration file')
# parse args
args = ap.parse_args()
- # validate output directory
- if not os.path.isdir(args.output):
- _perror('"{}" is not an existing directory'.format(args.output))
+    # validate output directories
+    for d in [args.code_dir, args.headers_dir, args.metadata_dir]:
+        if not os.path.isdir(d):
+            _perror('"{}" is not an existing directory'.format(d))
- # validate prefix
- if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', args.prefix):
- _perror('"{}" is not a valid C identifier'.format(args.prefix))
-
- # validate that metadata file exists
- if not os.path.isfile(args.metadata):
- _perror('"{}" is not an existing file'.format(args.metadata))
+    # validate that configuration file exists
+    if not os.path.isfile(args.config):
+        _perror('"{}" is not an existing file'.format(args.config))
return args
-class _CBlock(list):
- pass
-
-
-class _CLine(str):
- pass
-
-
-class BarectfCodeGenerator:
- _CTX_AT = 'ctx->at'
- _CTX_BUF = 'ctx->buf'
- _CTX_PACKET_SIZE = 'ctx->packet_size'
- _CTX_BUF_AT = '{}[{} >> 3]'.format(_CTX_BUF, _CTX_AT)
- _CTX_BUF_AT_ADDR = '&{}'.format(_CTX_BUF_AT)
- _CTX_CALL_CLOCK_CB = 'ctx->clock_cb(ctx->clock_cb_data)'
-
- _BO_SUFFIXES_MAP = {
- pytsdl.tsdl.ByteOrder.BE: 'be',
- pytsdl.tsdl.ByteOrder.LE: 'le',
- }
-
- _TSDL_TYPE_NAMES_MAP = {
- pytsdl.tsdl.Integer: 'integer',
- pytsdl.tsdl.FloatingPoint: 'floating point',
- pytsdl.tsdl.Enum: 'enumeration',
- pytsdl.tsdl.String: 'string',
- pytsdl.tsdl.Array: 'static array',
- pytsdl.tsdl.Sequence: 'dynamic array',
- pytsdl.tsdl.Struct: 'structure',
- }
-
- def __init__(self):
- self._parser = pytsdl.parser.Parser()
-
- self._obj_size_cb = {
- pytsdl.tsdl.Struct: self._get_struct_size,
- pytsdl.tsdl.Integer: self._get_integer_size,
- pytsdl.tsdl.Enum: self._get_enum_size,
- pytsdl.tsdl.FloatingPoint: self._get_floating_point_size,
- pytsdl.tsdl.Array: self._get_array_size,
- }
-
- self._obj_alignment_cb = {
- pytsdl.tsdl.Struct: self._get_struct_alignment,
- pytsdl.tsdl.Integer: self._get_integer_alignment,
- pytsdl.tsdl.Enum: self._get_enum_alignment,
- pytsdl.tsdl.FloatingPoint: self._get_floating_point_alignment,
- pytsdl.tsdl.Array: self._get_array_alignment,
- pytsdl.tsdl.Sequence: self._get_sequence_alignment,
- pytsdl.tsdl.String: self._get_string_alignment,
- }
-
- self._obj_param_ctype_cb = {
- pytsdl.tsdl.Struct: lambda obj: 'const void*',
- pytsdl.tsdl.Integer: self._get_integer_param_ctype,
- pytsdl.tsdl.Enum: self._get_enum_param_ctype,
- pytsdl.tsdl.FloatingPoint: self._get_floating_point_param_ctype,
- pytsdl.tsdl.Array: lambda obj: 'const void*',
- pytsdl.tsdl.Sequence: lambda obj: 'const void*',
- pytsdl.tsdl.String: lambda obj: 'const char*',
- }
-
- self._write_field_obj_cb = {
- pytsdl.tsdl.Struct: self._write_field_struct,
- pytsdl.tsdl.Integer: self._write_field_integer,
- pytsdl.tsdl.Enum: self._write_field_enum,
- pytsdl.tsdl.FloatingPoint: self._write_field_floating_point,
- pytsdl.tsdl.Array: self._write_field_array,
- pytsdl.tsdl.Sequence: self._write_field_sequence,
- pytsdl.tsdl.String: self._write_field_string,
- }
-
- self._get_src_name_funcs = {
- 'trace.packet.header.': self._get_tph_src_name,
- 'env.': self._get_env_src_name,
- 'stream.packet.context.': self._get_spc_src_name,
- 'stream.event.header.': self._get_seh_src_name,
- 'stream.event.context.': self._get_sec_src_name,
- 'event.context.': self._get_ec_src_name,
- 'event.fields.': self._get_ef_src_name,
- }
-
- # Finds the terminal element of a TSDL array/sequence.
- #
- # arrayseq: array or sequence
- def _find_arrayseq_element(self, arrayseq):
- el = arrayseq.element
- t = type(arrayseq.element)
-
- if t is pytsdl.tsdl.Array or t is pytsdl.tsdl.Sequence:
- return self._find_arrayseq_element(el)
-
- return el
-
- # Validates an inner TSDL structure's field (constrained structure).
- #
- # fname: field name
- # ftype: TSDL object
- def _validate_struct_field(self, fname, ftype, inner_struct):
- if type(ftype) is pytsdl.tsdl.Sequence:
- if inner_struct:
- raise RuntimeError('field "{}" is a dynamic array (not allowed here)'.format(fname))
- else:
- element = self._find_arrayseq_element(ftype)
- self._validate_struct_field(fname, element, True)
- elif type(ftype) is pytsdl.tsdl.Array:
- # we need to check every element until we find a terminal one
- element = self._find_arrayseq_element(ftype)
- self._validate_struct_field(fname, element, True)
- elif type(ftype) is pytsdl.tsdl.Variant:
- raise RuntimeError('field "{}" contains a variant (unsupported)'.format(fname))
- elif type(ftype) is pytsdl.tsdl.String:
- if inner_struct:
- raise RuntimeError('field "{}" contains a string (not allowed here)'.format(fname))
- elif type(ftype) is pytsdl.tsdl.Struct:
- self._validate_struct(ftype, True)
- elif type(ftype) is pytsdl.tsdl.Integer:
- if self._get_obj_size(ftype) > 64:
- raise RuntimeError('integer field "{}" larger than 64-bit'.format(fname))
- elif type(ftype) is pytsdl.tsdl.FloatingPoint:
- if self._get_obj_size(ftype) > 64:
- raise RuntimeError('floating point field "{}" larger than 64-bit'.format(fname))
- elif type(ftype) is pytsdl.tsdl.Enum:
- if self._get_obj_size(ftype) > 64:
- raise RuntimeError('enum field "{}" larger than 64-bit'.format(fname))
-
- # Validates an inner TSDL structure (constrained).
- #
- # struct: TSDL structure to validate
- def _validate_struct(self, struct, inner_struct):
- # just in case we call this with the wrong type
- if type(struct) is not pytsdl.tsdl.Struct:
- raise RuntimeError('expecting a struct')
-
- # make sure inner structures are at least byte-aligned
- if inner_struct:
- if self._get_obj_alignment(struct) < 8:
- raise RuntimeError('inner struct must be at least byte-aligned')
-
- # check each field
- for fname, ftype in struct.fields.items():
- self._validate_struct_field(fname, ftype, inner_struct)
-
- # Validates a context or fields structure.
- #
- # struct: context/fields TSDL structure
- def _validate_context_fields(self, struct):
- if type(struct) is not pytsdl.tsdl.Struct:
- raise RuntimeError('expecting a struct')
-
- self._validate_struct(struct, False)
-
- # Validates a TSDL integer with optional constraints.
- #
- # integer: TSDL integer to validate
- # size: expected size (None for any size)
- # align: expected alignment (None for any alignment)
- # signed: expected signedness (None for any signedness)
- def _validate_integer(self, integer, size=None, align=None,
- signed=None):
- if type(integer) is not pytsdl.tsdl.Integer:
- raise RuntimeError('expected integer')
-
- if size is not None:
- if integer.size != size:
- raise RuntimeError('expected {}-bit integer'.format(size))
-
- if align is not None:
- if integer.align != align:
- raise RuntimeError('expected integer with {}-bit alignment'.format(align))
-
- if signed is not None:
- if integer.signed != signed:
- raise RuntimeError('expected {} integer'.format('signed' if signed else 'unsigned'))
-
- # Validates a packet header.
- #
- # packet_header: packet header TSDL structure to validate
- def _validate_tph(self, packet_header):
- try:
- self._validate_struct(packet_header, True)
- except RuntimeError as e:
- _perror('packet header: {}'.format(e))
-
- # magic must be the first field
- if 'magic' in packet_header.fields:
- if list(packet_header.fields.keys())[0] != 'magic':
- _perror('packet header: "magic" must be the first field')
- else:
- _perror('packet header: missing "magic" field')
-
- # magic must be a 32-bit unsigned integer, 32-bit aligned
- try:
- self._validate_integer(packet_header['magic'], 32, 32, False)
- except RuntimeError as e:
- _perror('packet header: "magic": {}'.format(e))
-
- # mandatory stream_id
- if 'stream_id' not in packet_header.fields:
- _perror('packet header: missing "stream_id" field')
-
- # stream_id must be an unsigned integer
- try:
- self._validate_integer(packet_header['stream_id'], signed=False)
- except RuntimeError as e:
- _perror('packet header: "stream_id": {}'.format(e))
-
- # only magic and stream_id allowed
- if len(packet_header.fields) != 2:
- _perror('packet header: only "magic" and "stream_id" fields are allowed')
-
- # Converts a list of strings to a dotted representation. For
- # example, ['trace', 'packet', 'header', 'magic'] is converted to
- # 'trace.packet.header.magic'.
- #
- # name: list of strings to convert
- def _dot_name_to_str(self, name):
- return '.'.join(name)
-
- # Compares two TSDL integers. Returns True if they are the same.
- #
- # int1: first TSDL integer
- # int2: second TSDL integer
- def _compare_integers(self, int1, int2):
- if type(int1) is not pytsdl.tsdl.Integer:
- return False
-
- if type(int2) is not pytsdl.tsdl.Integer:
- return False
-
- size = int1.size == int2.size
- align = int1.align == int2.align
- cmap = int1.map == int2.map
- base = int1.base == int2.base
- encoding = int1.encoding == int2.encoding
- signed = int1.signed == int2.signed
- comps = (size, align, cmap, base, encoding, signed)
-
- # True means 1 for sum()
- return sum(comps) == len(comps)
-
- # Validates a packet context.
- #
- # stream: TSDL stream containing the packet context to validate
- def _validate_spc(self, stream):
- packet_context = stream.packet_context
- sid = stream.id
-
- try:
- self._validate_struct(packet_context, True)
- except RuntimeError as e:
- _perror('stream {}: packet context: {}'.format(sid, e))
-
- fields = packet_context.fields
-
- # if timestamp_begin exists, timestamp_end must exist
- if 'timestamp_begin' in fields or 'timestamp_end' in fields:
- if 'timestamp_begin' not in fields or 'timestamp_end' not in fields:
- _perror('stream {}: packet context: "timestamp_begin" must exist if "timestamp_end" exists'.format(sid))
- else:
- # timestamp_begin and timestamp_end must have the same integer
- # as the event header's timestamp field (should exist by now)
- timestamp = stream.event_header['timestamp']
-
- if not self._compare_integers(fields['timestamp_begin'], timestamp):
- _perror('stream {}: packet context: "timestamp_begin": integer type different from event header\'s "timestamp" field'.format(sid))
-
- if not self._compare_integers(fields['timestamp_end'], timestamp):
- _perror('stream {}: packet context: "timestamp_end": integer type different from event header\'s "timestamp" field'.format(sid))
-
- # content_size must exist and be an unsigned integer
- if 'content_size' not in fields:
- _perror('stream {}: packet context: missing "content_size" field'.format(sid))
-
- try:
- self._validate_integer(fields['content_size'], 32, 32, False)
- except:
- try:
- self._validate_integer(fields['content_size'], 64, 64, False)
- except:
- _perror('stream {}: packet context: "content_size": expecting a 32-bit-aligned 32-bit integer, or a 64-bit-aligned 64-bit integer'.format(sid))
-
- # packet_size must exist and be an unsigned integer
- if 'packet_size' not in fields:
- _perror('stream {}: packet context: missing "packet_size" field'.format(sid))
-
- try:
- self._validate_integer(fields['packet_size'], 32, 32, False)
- except:
- try:
- self._validate_integer(fields['packet_size'], 64, 64, False)
- except:
- _perror('stream {}: packet context: "packet_size": expecting a 32-bit-aligned 32-bit integer, or a 64-bit-aligned 64-bit integer'.format(sid))
-
- # if cpu_id exists, must be an unsigned integer
- if 'cpu_id' in fields:
- try:
- self._validate_integer(fields['cpu_id'], signed=False)
- except RuntimeError as e:
- _perror('stream {}: packet context: "cpu_id": {}'.format(sid, e))
-
- # if events_discarded exists, must be an unsigned integer
- if 'events_discarded' in fields:
- try:
- self._validate_integer(fields['events_discarded'], signed=False)
- except RuntimeError as e:
- _perror('stream {}: packet context: "events_discarded": {}'.format(sid, e))
-
- # Validates an event header.
- #
- # stream: TSDL stream containing the event header to validate
- def _validate_seh(self, stream):
- event_header = stream.event_header
- sid = stream.id
-
- try:
- self._validate_struct(event_header, True)
- except RuntimeError as e:
- _perror('stream {}: event header: {}'.format(sid, e))
-
- fields = event_header.fields
-
- # id must exist and be an unsigned integer
- if 'id' not in fields:
- _perror('stream {}: event header: missing "id" field'.format(sid))
-
- try:
- self._validate_integer(fields['id'], signed=False)
- except RuntimeError as e:
- _perror('stream {}: "id": {}'.format(sid, format(e)))
-
- # timestamp must exist, be an unsigned integer and be mapped to a valid clock
- if 'timestamp' not in fields:
- _perror('stream {}: event header: missing "timestamp" field'.format(sid))
-
- try:
- self._validate_integer(fields['timestamp'], signed=False)
- except RuntimeError as e:
- _perror('stream {}: event header: "timestamp": {}'.format(sid, format(e)))
-
- if fields['timestamp'].map is None:
- _perror('stream {}: event header: "timestamp" must be mapped to a valid clock'.format(sid))
-
- # id must be the first field, followed by timestamp
- if list(fields.keys())[0] != 'id':
- _perror('stream {}: event header: "id" must be the first field'.format(sid))
-
- if list(fields.keys())[1] != 'timestamp':
- _perror('stream {}: event header: "timestamp" must be the second field'.format(sid))
-
- # only id and timestamp and allowed in event header
- if len(fields) != 2:
- _perror('stream {}: event header: only "id" and "timestamp" fields are allowed'.format(sid))
-
- # Validates a strean event context.
- #
- # stream: TSDL stream containing the stream event context
- def _validate_sec(self, stream):
- stream_event_context = stream.event_context
- sid = stream.id
-
- if stream_event_context is None:
- return
-
- try:
- self._validate_context_fields(stream_event_context)
- except RuntimeError as e:
- _perror('stream {}: event context: {}'.format(sid, e))
-
- # Validates an event context.
- #
- # stream: TSDL stream containing the TSDL event
- # event: TSDL event containing the context to validate
- def _validate_ec(self, stream, event):
- event_context = event.context
- sid = stream.id
- eid = event.id
-
- if event_context is None:
- return
-
- try:
- self._validate_context_fields(event_context)
- except RuntimeError as e:
- _perror('stream {}: event {}: context: {}'.format(sid, eid, e))
-
- # Validates an event fields.
- #
- # stream: TSDL stream containing the TSDL event
- # event: TSDL event containing the fields to validate
- def _validate_ef(self, stream, event):
- event_fields = event.fields
- sid = stream.id
- eid = event.id
-
- try:
- self._validate_context_fields(event_fields)
- except RuntimeError as e:
- _perror('stream {}: event {}: fields: {}'.format(sid, eid, e))
-
- # Validates a TSDL event.
- #
- # stream: TSDL stream containing the TSDL event
- # event: TSDL event to validate
- def _validate_event(self, stream, event):
- # name must be a compatible C identifier
- if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', event.name):
- fmt = 'stream {}: event {}: malformed event name: "{}"'
- _perror(fmt.format(stream.id, event.id, event.name))
-
- self._validate_ec(stream, event)
- self._validate_ef(stream, event)
-
- # Validates a TSDL stream.
- #
- # stream: TSDL stream to validate
- def _validate_stream(self, stream):
- self._validate_seh(stream)
- self._validate_spc(stream)
- self._validate_sec(stream)
-
- # event stuff
- for event in stream.events:
- self._validate_event(stream, event)
-
- # Validates all TSDL scopes of the current TSDL document.
- def _validate_all_scopes(self):
- # packet header
- self._validate_tph(self._doc.trace.packet_header)
-
- # stream stuff
- for stream in self._doc.streams.values():
- self._validate_stream(stream)
-
- # Validates the trace block.
- def _validate_trace(self):
- # make sure a native byte order is specified
- if self._doc.trace.byte_order is None:
- _perror('native byte order (trace.byte_order) is not specified')
-
- # Validates the current TSDL document.
- def _validate_metadata(self):
- self._validate_trace()
- self._validate_all_scopes()
-
- # Returns an aligned number.
- #
- # 3, 4 -> 4
- # 4, 4 -> 4
- # 5, 4 -> 8
- # 6, 4 -> 8
- # 7, 4 -> 8
- # 8, 4 -> 8
- # 9, 4 -> 12
- #
- # at: number to align
- # align: alignment (power of two)
- def _get_alignment(self, at, align):
- return (at + align - 1) & -align
-
- # Converts a tree of offset variables:
- #
- # field
- # a -> 0
- # b -> 8
- # other_struct
- # field -> 16
- # yeah -> 20
- # c -> 32
- # len -> 36
- #
- # to a flat dict:
- #
- # field_a -> 0
- # field_b -> 8
- # field_other_struct_field -> 16
- # field_other_struct_yeah -> 20
- # field_c -> 32
- # len -> 36
- #
- # offvars_tree: tree of offset variables
- # prefix: offset variable name prefix
- # offvars: flattened offset variables
- def _flatten_offvars_tree(self, offvars_tree, prefix=None,
- offvars=None):
- if offvars is None:
- offvars = collections.OrderedDict()
-
- for name, offset in offvars_tree.items():
- if prefix is not None:
- varname = '{}_{}'.format(prefix, name)
- else:
- varname = name
-
- if isinstance(offset, dict):
- self._flatten_offvars_tree(offset, varname, offvars)
- else:
- offvars[varname] = offset
-
- return offvars
-
- # Returns the size of a TSDL structure with _static size_ (must be
- # validated first).
- #
- # struct: TSDL structure of which to get the size
- # offvars_tree: optional offset variables tree (output)
- # base_offset: base offsets for offset variables
- def _get_struct_size(self, struct,
- offvars_tree=None,
- base_offset=0):
- if offvars_tree is None:
- offvars_tree = collections.OrderedDict()
-
- offset = 0
-
- for fname, ftype in struct.fields.items():
- field_alignment = self._get_obj_alignment(ftype)
- offset = self._get_alignment(offset, field_alignment)
-
- if type(ftype) is pytsdl.tsdl.Struct:
- offvars_tree[fname] = collections.OrderedDict()
- sz = self._get_struct_size(ftype, offvars_tree[fname],
- base_offset + offset)
- else:
- # only integers may act as sequence lengths
- if type(ftype) is pytsdl.tsdl.Integer:
- offvars_tree[fname] = base_offset + offset
-
- sz = self._get_obj_size(ftype)
-
- offset += sz
-
- return offset
-
- # Returns the size of a TSDL array.
- #
- # array: TSDL array of which to get the size
- def _get_array_size(self, array):
- element = array.element
-
- # effective size of one element includes its alignment after its size
- size = self._get_obj_size(element)
- align = self._get_obj_alignment(element)
-
- return self._get_alignment(size, align) * array.length
-
- # Returns the size of a TSDL enumeration.
- #
- # enum: TSDL enumeration of which to get the size
- def _get_enum_size(self, enum):
- return self._get_obj_size(enum.integer)
-
- # Returns the size of a TSDL floating point number.
- #
- # floating_point: TSDL floating point number of which to get the size
- def _get_floating_point_size(self, floating_point):
- return floating_point.exp_dig + floating_point.mant_dig
-
- # Returns the size of a TSDL integer.
- #
- # integer: TSDL integer of which to get the size
- def _get_integer_size(self, integer):
- return integer.size
-
- # Returns the size of a TSDL type.
- #
- # obj: TSDL type of which to get the size
- def _get_obj_size(self, obj):
- return self._obj_size_cb[type(obj)](obj)
-
- # Returns the alignment of a TSDL structure.
- #
- # struct: TSDL structure of which to get the alignment
- def _get_struct_alignment(self, struct):
- if struct.align is not None:
- return struct.align
-
- cur_align = 1
-
- for fname, ftype in struct.fields.items():
- cur_align = max(self._get_obj_alignment(ftype), cur_align)
-
- return cur_align
-
- # Returns the alignment of a TSDL integer.
- #
- # integer: TSDL integer of which to get the alignment
- def _get_integer_alignment(self, integer):
- return integer.align
-
- # Returns the alignment of a TSDL floating point number.
- #
- # floating_point: TSDL floating point number of which to get the
- # alignment
- def _get_floating_point_alignment(self, floating_point):
- return floating_point.align
-
- # Returns the alignment of a TSDL enumeration.
- #
- # enum: TSDL enumeration of which to get the alignment
- def _get_enum_alignment(self, enum):
- return self._get_obj_alignment(enum.integer)
-
- # Returns the alignment of a TSDL string.
- #
- # string: TSDL string of which to get the alignment
- def _get_string_alignment(self, string):
- return 8
-
- # Returns the alignment of a TSDL array.
- #
- # array: TSDL array of which to get the alignment
- def _get_array_alignment(self, array):
- return self._get_obj_alignment(array.element)
-
- # Returns the alignment of a TSDL sequence.
- #
- # sequence: TSDL sequence of which to get the alignment
- def _get_sequence_alignment(self, sequence):
- return self._get_obj_alignment(sequence.element)
-
- # Returns the alignment of a TSDL type.
- #
- # obj: TSDL type of which to get the alignment
- def _get_obj_alignment(self, obj):
- return self._obj_alignment_cb[type(obj)](obj)
-
- # Converts a field name to a C parameter name.
- #
- # You should not use this function directly, but rather use one
- # of the _*_fname_to_pname() variants depending on your scope.
- #
- # prefix: parameter name prefix
- # fname: field name
- def _fname_to_pname(self, prefix, fname):
- return 'param_{}_{}'.format(prefix, fname)
-
- # Converts an event fields field name to a C parameter name.
- #
- # fname: field name
- def _ef_fname_to_pname(self, fname):
- return self._fname_to_pname('ef', fname)
-
- # Converts an event context field name to a C parameter name.
- #
- # fname: field name
- def _ec_fname_to_pname(self, fname):
- return self._fname_to_pname('ec', fname)
-
- # Converts a stream event context field name to a C parameter name.
- #
- # fname: field name
- def _sec_fname_to_pname(self, fname):
- return self._fname_to_pname('sec', fname)
-
- # Converts an event header field name to a C parameter name.
- #
- # fname: field name
- def _eh_fname_to_pname(self, fname):
- return self._fname_to_pname('eh', fname)
-
- # Converts a stream packet context field name to a C parameter name.
- #
- # fname: field name
- def _spc_fname_to_pname(self, fname):
- return self._fname_to_pname('spc', fname)
-
- # Converts a trace packet header field name to a C parameter name.
- #
- # fname: field name
- def _tph_fname_to_pname(self, fname):
- return self._fname_to_pname('tph', fname)
-
- # Returns the equivalent C type of a TSDL integer.
- #
- # integer: TSDL integer of which to get the equivalent C type
- def _get_integer_param_ctype(self, integer):
- signed = 'u' if not integer.signed else ''
-
- if integer.size <= 8:
- sz = '8'
- elif integer.size <= 16:
- sz = '16'
- elif integer.size <= 32:
- sz = '32'
- elif integer.size == 64:
- sz = '64'
-
- return '{}int{}_t'.format(signed, sz)
-
- # Returns the equivalent C type of a TSDL enumeration.
- #
- # enum: TSDL enumeration of which to get the equivalent C type
- def _get_enum_param_ctype(self, enum):
- return self._get_obj_param_ctype(enum.integer)
-
- # Returns the equivalent C type of a TSDL floating point number.
- #
- # fp: TSDL floating point number of which to get the equivalent C type
- def _get_floating_point_param_ctype(self, fp):
- if fp.exp_dig == 8 and fp.mant_dig == 24 and fp.align == 32:
- return 'float'
- elif fp.exp_dig == 11 and fp.mant_dig == 53 and fp.align == 64:
- return 'double'
- else:
- return 'uint64_t'
-
- # Returns the equivalent C type of a TSDL type.
- #
- # obj: TSDL type of which to get the equivalent C type
- def _get_obj_param_ctype(self, obj):
- return self._obj_param_ctype_cb[type(obj)](obj)
-
- # Returns the check offset overflow macro call string for a given size.
- #
- # size: size to check
- def _get_chk_offset_v(self, size):
- fmt = '{}_CHK_OFFSET_V({}, {}, {});'
- ret = fmt.format(self._prefix.upper(), self._CTX_AT,
- self._CTX_PACKET_SIZE, size)
-
- return ret
-
- # Returns the check offset overflow macro call C line for a given size.
- #
- # size: size to check
- def _get_chk_offset_v_cline(self, size):
- return _CLine(self._get_chk_offset_v(size))
-
- # Returns the offset alignment macro call string for a given alignment.
- #
- # size: new alignment
- def _get_align_offset(self, align, at=None):
- if at is None:
- at = self._CTX_AT
-
- fmt = '{}_ALIGN_OFFSET({}, {});'
- ret = fmt.format(self._prefix.upper(), at, align)
-
- return ret
-
- # Returns the offset alignment macro call C line for a given alignment.
- #
- # size: new alignment
- def _get_align_offset_cline(self, size):
- return _CLine(self._get_align_offset(size))
-
- # Converts a C source string with newlines to an array of C lines and
- # returns it.
- #
- # s: C source string
- def _str_to_clines(self, s):
- lines = s.split('\n')
-
- return [_CLine(line) for line in lines]
-
- # Fills a given template with values and returns its C lines. The `prefix`
- # and `ucprefix` template variable are automatically provided using the
- # generator's context.
- #
- # tmpl: template
- # kwargs: additional template variable values
- def _template_to_clines(self, tmpl, **kwargs):
- s = tmpl.format(prefix=self._prefix, ucprefix=self._prefix.upper(),
- **kwargs)
-
- return self._str_to_clines(s)
-
- # Returns the C lines for writing a TSDL structure field.
- #
- # fname: field name
- # src_name: C source pointer
- # struct: TSDL structure
- def _write_field_struct(self, fname, src_name, struct, scope_prefix=None):
- size = self._get_struct_size(struct)
- size_bytes = self._get_alignment(size, 8) // 8
- dst = self._CTX_BUF_AT_ADDR
-
- return [
- # memcpy() is safe since barectf requires inner structures
- # to be byte-aligned
- self._get_chk_offset_v_cline(size),
- _CLine('memcpy({}, {}, {});'.format(dst, src_name, size_bytes)),
- _CLine('{} += {};'.format(self._CTX_AT, size)),
- ]
-
- # Returns the C lines for writing a TSDL integer field.
- #
- # fname: field name
- # src_name: C source integer
- # integer: TSDL integer
- def _write_field_integer(self, fname, src_name, integer, scope_prefix=None):
- bo = self._BO_SUFFIXES_MAP[integer.byte_order]
- length = self._get_obj_size(integer)
- signed = 'signed' if integer.signed else 'unsigned'
-
- return self._template_to_clines(barectf.templates.WRITE_INTEGER,
- signed=signed, sz=length, bo=bo,
- src_name=src_name)
-
- # Returns the C lines for writing a TSDL enumeration field.
- #
- # fname: field name
- # src_name: C source integer
- # enum: TSDL enumeration
- def _write_field_enum(self, fname, src_name, enum, scope_prefix=None):
- return self._write_field_obj(fname, src_name, enum.integer,
- scope_prefix)
-
- # Returns the C lines for writing a TSDL floating point number field.
- #
- # fname: field name
- # src_name: C source pointer
- # floating_point: TSDL floating point number
- def _write_field_floating_point(self, fname, src_name, floating_point,
- scope_prefix=None):
- bo = self._BO_SUFFIXES_MAP[floating_point.byte_order]
- t = self._get_obj_param_ctype(floating_point)
- length = self._get_obj_size(floating_point)
-
- if t == 'float':
- t = 'uint32_t'
- elif t == 'double':
- t = 'uint64_t'
-
- src_name_casted = '*(({}*) &{})'.format(t, src_name)
-
- return self._template_to_clines(barectf.templates.WRITE_INTEGER,
- signed='unsigned', sz=length, bo=bo,
- src_name=src_name_casted)
-
- # Returns the C lines for writing either a TSDL array field or a
- # TSDL sequence field.
- #
- # fname: field name
- # src_name: C source pointer
- # arrayseq: TSDL array or sequence
- # scope_prefix: preferred scope prefix
- def _write_field_array_sequence(self, fname, src_name, arrayseq,
- scope_prefix):
- def length_index_varname(index):
- return 'lens_{}_{}'.format(fname, index)
-
- # first pass: find all lengths to multiply
- mulops = []
- done = False
-
- while not done:
- mulops.append(arrayseq.length)
- element = arrayseq.element
- tel = type(element)
-
- if tel is pytsdl.tsdl.Array or tel is pytsdl.tsdl.Sequence:
- # another array/sequence; continue
- arrayseq = element
- continue
-
- # found the end
- done = True
-
- # align the size of the repeating element (effective repeating size)
- el_size = self._get_obj_size(element)
- el_align = self._get_obj_alignment(element)
- el_size = self._get_alignment(el_size, el_align)
-
- # this effective size is part of the operands to multiply
- mulops.append(el_size)
-
- # clines
- clines = []
-
- # fetch and save sequence lengths
- emulops = []
-
- for i in range(len(mulops)):
- mulop = mulops[i]
-
- if type(mulop) is list:
- # offset variable to fetch
- offvar = self._get_seq_length_src_name(mulop, scope_prefix)
-
- if type(offvar) is int:
- # environment constant
- emulops.append(str(offvar))
- continue
-
- # save buffer position
- line = 'ctx_at_bkup = {};'.format(self._CTX_AT)
- clines.append(_CLine(line))
-
- # go back to field offset
- line = '{} = {};'.format(self._CTX_AT, offvar)
- clines.append(_CLine(line))
-
- # read value into specific variable
- varname = length_index_varname(i)
- emulops.append(varname)
- varctype = 'uint32_t'
- fmt = '{ctype} {cname} = *(({ctype}*) ({ctxbufataddr}));'
- line = fmt.format(ctype=varctype, cname=varname,
- ctxbufataddr=self._CTX_BUF_AT_ADDR)
- clines.append(_CLine(line))
-
- # restore buffer position
- line = '{} = ctx_at_bkup;'.format(self._CTX_AT)
- clines.append(_CLine(line))
- else:
- emulops.append(str(mulop))
-
- # write product of sizes in bits
- mul = ' * '.join(emulops)
- sz_bits_varname = 'sz_bits_{}'.format(fname)
- sz_bytes_varname = 'sz_bytes_{}'.format(fname)
- line = 'uint32_t {} = {};'.format(sz_bits_varname, mul)
- clines.append(_CLine(line))
-
- # check overflow
- clines.append(self._get_chk_offset_v_cline(sz_bits_varname))
-
- # write product of sizes in bytes
- line = 'uint32_t {} = {};'.format(sz_bytes_varname, sz_bits_varname)
- clines.append(_CLine(line))
- line = self._get_align_offset(8, at=sz_bytes_varname)
- clines.append(_CLine(line))
- line = '{} >>= 3;'.format(sz_bytes_varname)
- clines.append(_CLine(line))
-
- # memcpy()
- dst = self._CTX_BUF_AT_ADDR
- line = 'memcpy({}, {}, {});'.format(dst, src_name, sz_bytes_varname)
- clines.append(_CLine(line))
- line = '{} += {};'.format(self._CTX_AT, sz_bits_varname)
- clines.append(_CLine(line))
-
- return clines
-
- # Returns the C lines for writing a TSDL array field.
- #
- # fname: field name
- # src_name: C source pointer
- # array: TSDL array
- # scope_prefix: preferred scope prefix
- def _write_field_array(self, fname, src_name, array, scope_prefix=None):
- return self._write_field_array_sequence(fname, src_name, array,
- scope_prefix)
-
- # Returns the C lines for writing a TSDL sequence field.
- #
- # fname: field name
- # src_name: C source pointer
- # sequence: TSDL sequence
- # scope_prefix: preferred scope prefix
- def _write_field_sequence(self, fname, src_name, sequence, scope_prefix):
- return self._write_field_array_sequence(fname, src_name, sequence,
- scope_prefix)
-
- # Returns a trace packet header C source name out of a sequence length
- # expression.
- #
- # length: sequence length expression
- def _get_tph_src_name(self, length):
- offvar = self._get_offvar_name_from_expr(length[3:], 'tph')
-
- return 'ctx->{}'.format(offvar)
-
-    # Returns an environment constant (an integer) out of a sequence
-    # length expression.
- #
- # length: sequence length expression
- def _get_env_src_name(self, length):
- if len(length) != 2:
- _perror('invalid sequence length: "{}"'.format(self._dot_name_to_str(length)))
-
- fname = length[1]
-
- if fname not in self._doc.env:
- _perror('cannot find field env.{}'.format(fname))
-
- env_length = self._doc.env[fname]
-
- if type(env_length) is not int:
- _perror('env.{} is not a constant integer'.format(fname))
-
-        return env_length
-
- # Returns a stream packet context C source name out of a sequence length
- # expression.
- #
- # length: sequence length expression
- def _get_spc_src_name(self, length):
- offvar = self._get_offvar_name_from_expr(length[3:], 'spc')
-
- return 'ctx->{}'.format(offvar)
-
- # Returns a stream event header C source name out of a sequence length
- # expression.
- #
- # length: sequence length expression
- def _get_seh_src_name(self, length):
- return self._get_offvar_name_from_expr(length[3:], 'seh')
-
- # Returns a stream event context C source name out of a sequence length
- # expression.
- #
- # length: sequence length expression
- def _get_sec_src_name(self, length):
- return self._get_offvar_name_from_expr(length[3:], 'sec')
-
- # Returns an event context C source name out of a sequence length
- # expression.
- #
- # length: sequence length expression
- def _get_ec_src_name(self, length):
- return self._get_offvar_name_from_expr(length[2:], 'ec')
-
- # Returns an event fields C source name out of a sequence length
- # expression.
- #
- # length: sequence length expression
- def _get_ef_src_name(self, length):
- return self._get_offvar_name_from_expr(length[2:], 'ef')
-
- # Returns a C source name out of a sequence length expression.
- #
- # length: sequence length expression
- # scope_prefix: preferred scope prefix
- def _get_seq_length_src_name(self, length, scope_prefix=None):
- length_dot = self._dot_name_to_str(length)
-
- for prefix, get_src_name in self._get_src_name_funcs.items():
- if length_dot.startswith(prefix):
- return get_src_name(length)
-
- return self._get_offvar_name_from_expr(length, scope_prefix)
-
- # Returns the C lines for writing a TSDL string field.
- #
- # fname: field name
- # src_name: C source pointer
- # string: TSDL string
- def _write_field_string(self, fname, src_name, string, scope_prefix=None):
- clines = []
-
- # get string length
- sz_bytes_varname = 'slen_bytes_{}'.format(fname)
- line = 'size_t {} = strlen({}) + 1;'.format(sz_bytes_varname, src_name)
- clines.append(_CLine(line))
-
- # check offset overflow
- sz_bits_varname = 'slen_bits_{}'.format(fname)
- line = 'size_t {} = ({} << 3);'.format(sz_bits_varname,
- sz_bytes_varname)
- clines.append(_CLine(line))
- cline = self._get_chk_offset_v_cline(sz_bits_varname)
- clines.append(cline)
-
- # memcpy()
- dst = self._CTX_BUF_AT_ADDR
- line = 'memcpy({}, {}, {});'.format(dst, src_name, sz_bytes_varname)
- clines.append(_CLine(line))
-
- # update bit position
- line = '{} += {};'.format(self._CTX_AT, sz_bits_varname)
- clines.append(_CLine(line))
-
- return clines
-
- # Returns the C lines for writing a TSDL type field.
- #
- # fname: field name
- # src_name: C source pointer
- # ftype: TSDL type
- # scope_prefix: preferred scope prefix
- def _write_field_obj(self, fname, src_name, ftype, scope_prefix):
- return self._write_field_obj_cb[type(ftype)](fname, src_name, ftype,
- scope_prefix)
+def _write_file(d, name, content):
+ with open(os.path.join(d, name), 'w') as f:
+ f.write(content)
- # Returns an offset variable name out of an offset name.
- #
- # name: offset name
- # prefix: offset variable name prefix
- def _get_offvar_name(self, name, prefix=None):
- parts = ['off']
- if prefix is not None:
- parts.append(prefix)
-
- parts.append(name)
-
- return '_'.join(parts)
-
- # Returns an offset variable name out of an expression (array of
- # strings).
- #
- # expr: array of strings
- # prefix: offset variable name prefix
- def _get_offvar_name_from_expr(self, expr, prefix=None):
- return self._get_offvar_name('_'.join(expr), prefix)
-
- # Returns the C lines for writing a TSDL field.
- #
- # fname: field name
- # ftype: TSDL field type
- # scope_name: scope name
- # scope_prefix: preferred scope prefix
- # param_name_cb: callback to get the C parameter name out of the
- # field name
- def _field_to_clines(self, fname, ftype, scope_name, scope_prefix,
- param_name_cb):
- clines = []
- pname = param_name_cb(fname)
- align = self._get_obj_alignment(ftype)
-
- # group comment
- fmt = '/* write {}.{} ({}) */'
- line = fmt.format(scope_name, fname,
- self._TSDL_TYPE_NAMES_MAP[type(ftype)])
- clines.append(_CLine(line))
-
- # align bit index before writing to the buffer
- cline = self._get_align_offset_cline(align)
- clines.append(cline)
-
- # write offset variables
- if type(ftype) is pytsdl.tsdl.Struct:
- offvars_tree = collections.OrderedDict()
- self._get_struct_size(ftype, offvars_tree)
- offvars = self._flatten_offvars_tree(offvars_tree)
-
-            # as many offsets as there are child fields because a future
-            # sequence could refer to any of those fields
- for lname, offset in offvars.items():
- offvar = self._get_offvar_name('_'.join([fname, lname]),
- scope_prefix)
- fmt = 'uint32_t {} = (uint32_t) {} + {};'
-                line = fmt.format(offvar, self._CTX_AT, offset)
- clines.append(_CLine(line))
- elif type(ftype) is pytsdl.tsdl.Integer:
- # offset of this simple field is the current bit index
- offvar = self._get_offvar_name(fname, scope_prefix)
- line = 'uint32_t {} = (uint32_t) {};'.format(offvar, self._CTX_AT)
- clines.append(_CLine(line))
-
- clines += self._write_field_obj(fname, pname, ftype, scope_prefix)
-
- return clines
-
- # Joins C line groups and returns C lines.
- #
- # cline_groups: C line groups to join
- def _join_cline_groups(self, cline_groups):
- if not cline_groups:
- return cline_groups
-
- output_clines = cline_groups[0]
-
- for clines in cline_groups[1:]:
- output_clines.append('')
- output_clines += clines
-
- return output_clines
-
- # Returns the C lines for writing a complete TSDL structure (top level
- # scope).
- #
- # struct: TSDL structure
- # scope_name: scope name
- # scope_prefix: preferred scope prefix
- # param_name_cb: callback to get the C parameter name out of the
- # field name
- def _struct_to_clines(self, struct, scope_name, scope_prefix,
- param_name_cb):
- cline_groups = []
-
- for fname, ftype in struct.fields.items():
- clines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix, param_name_cb)
- cline_groups.append(clines)
-
- return self._join_cline_groups(cline_groups)
-
-    # Returns the size and offset variables of a TSDL structure.
- #
- # struct: TSDL structure
- def _get_struct_size_offvars(self, struct):
- offvars_tree = collections.OrderedDict()
- size = self._get_struct_size(struct, offvars_tree)
- offvars = self._flatten_offvars_tree(offvars_tree)
-
- return size, offvars
-
- # Returns the size and offset variables of the current trace packet header.
- def _get_tph_size_offvars(self):
- return self._get_struct_size_offvars(self._doc.trace.packet_header)
-
-    # Returns the size and offset variables of a stream packet context.
- #
- # stream: TSDL stream
- def _get_spc_size_offvars(self, stream):
- return self._get_struct_size_offvars(stream.packet_context)
-
-    # Returns the C lines declaring the offset variable members of the
-    # barectf context C structure.
- #
- # prefix: offset variable names prefix
- # offvars: offset variables
- def _offvars_to_ctx_clines(self, prefix, offvars):
- clines = []
-
- for name in offvars.keys():
- offvar = self._get_offvar_name(name, prefix)
- clines.append(_CLine('uint32_t {};'.format(offvar)))
-
- return clines
-
- # Generates a barectf context C structure.
- #
- # stream: TSDL stream
- # hide_sid: True to hide the stream ID
- def _gen_barectf_ctx_struct(self, stream, hide_sid=False):
- # get offset variables for both the packet header and packet context
- tph_size, tph_offvars = self._get_tph_size_offvars()
- spc_size, spc_offvars = self._get_spc_size_offvars(stream)
- clines = self._offvars_to_ctx_clines('tph', tph_offvars)
- clines += self._offvars_to_ctx_clines('spc', spc_offvars)
-
- # indent C
- clines_indented = []
- for cline in clines:
- clines_indented.append(_CLine('\t' + cline))
-
- # clock callback
- clock_cb = '\t/* (no clock callback) */'
-
- if not self._manual_clock:
- ctype = self._get_clock_ctype(stream)
- fmt = '\t{} (*clock_cb)(void*);\n\tvoid* clock_cb_data;'
- clock_cb = fmt.format(ctype)
-
- # fill template
- sid = ''
-
- if not hide_sid:
- sid = stream.id
-
- t = barectf.templates.BARECTF_CTX
- struct = t.format(prefix=self._prefix, sid=sid,
- ctx_fields='\n'.join(clines_indented),
- clock_cb=clock_cb)
-
- return struct
-
- # Generates all barectf context C structures.
- def _gen_barectf_contexts_struct(self):
- hide_sid = False
-
- if len(self._doc.streams) == 1:
- hide_sid = True
-
- structs = []
-
- for stream in self._doc.streams.values():
- struct = self._gen_barectf_ctx_struct(stream, hide_sid)
- structs.append(struct)
-
- return '\n\n'.join(structs)
-
- # Returns the C type of the clock used by the event header of a
- # TSDL stream.
- #
- # stream: TSDL stream containing the event header to inspect
- def _get_clock_ctype(self, stream):
- return self._get_obj_param_ctype(stream.event_header['timestamp'])
-
- # Generates the manual clock value C parameter for a given stream.
- #
- # stream: TSDL stream
- def _gen_manual_clock_param(self, stream):
- return '{} param_clock'.format(self._get_clock_ctype(stream))
-
- # Generates the body of a barectf_open() function.
- #
- # stream: TSDL stream
- def _gen_barectf_func_open_body(self, stream):
- clines = []
-
- # keep clock value (for timestamp_begin)
- if self._stream_has_timestamp_begin_end(stream):
- # get clock value ASAP
- clk_type = self._get_clock_ctype(stream)
- clk = self._gen_get_clock_value()
- line = '{} clk_value = {};'.format(clk_type, clk)
- clines.append(_CLine(line))
- clines.append(_CLine(''))
-
- # reset bit position to write the packet context (after packet header)
- spc_offset = self._get_stream_packet_context_offset(stream)
- fmt = '{} = {};'
- line = fmt.format(self._CTX_AT, spc_offset)
- clines.append(_CLine(line))
-
- # bit position at beginning of event (to reset in case we run
- # out of space)
- line = 'uint32_t ctx_at_begin = {};'.format(self._CTX_AT)
- clines.append(_CLine(line))
- clines.append(_CLine(''))
-
- # packet context fields
- fcline_groups = []
- scope_name = 'stream.packet.context'
- scope_prefix = 'spc'
-
- for fname, ftype in stream.packet_context.fields.items():
- # packet size
- if fname == 'packet_size':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix,
- lambda x: 'ctx->packet_size')
- fcline_groups.append(fclines)
-
- # content size (skip)
- elif fname == 'content_size':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix, lambda x: '0')
- fcline_groups.append(fclines)
-
- # events discarded (skip)
- elif fname == 'events_discarded':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix, lambda x: '0')
- fcline_groups.append(fclines)
-
- # timestamp_begin
- elif fname == 'timestamp_begin':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix,
- lambda x: 'clk_value')
- fcline_groups.append(fclines)
-
- # timestamp_end (skip)
- elif fname == 'timestamp_end':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix, lambda x: '0')
- fcline_groups.append(fclines)
-
- # anything else
- else:
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix,
- self._spc_fname_to_pname)
- fcline_groups.append(fclines)
-
- # return 0
- fcline_groups.append([_CLine('return 0;')])
-
- clines += self._join_cline_groups(fcline_groups)
-
- # get source
- cblock = _CBlock(clines)
- src = self._cblock_to_source(cblock)
-
- return src
-
- _SPC_KNOWN_FIELDS = [
- 'content_size',
- 'packet_size',
- 'timestamp_begin',
- 'timestamp_end',
- 'events_discarded',
- ]
-
- # Generates a barectf_open() function.
- #
- # stream: TSDL stream
- # gen_body: also generate function body
- # hide_sid: True to hide the stream ID
- def _gen_barectf_func_open(self, stream, gen_body, hide_sid=False):
- params = []
-
- # manual clock
- if self._manual_clock:
- clock_param = self._gen_manual_clock_param(stream)
- params.append(clock_param)
-
- # packet context
- for fname, ftype in stream.packet_context.fields.items():
- if fname in self._SPC_KNOWN_FIELDS:
- continue
-
- ptype = self._get_obj_param_ctype(ftype)
- pname = self._spc_fname_to_pname(fname)
- param = '{} {}'.format(ptype, pname)
- params.append(param)
-
- params_str = ''
-
- if params:
- params_str = ',\n\t'.join([''] + params)
-
- # fill template
- sid = ''
-
- if not hide_sid:
- sid = stream.id
-
- t = barectf.templates.FUNC_OPEN
- func = t.format(si=self._si_str, prefix=self._prefix, sid=sid,
- params=params_str)
-
- if gen_body:
- func += '\n{\n'
- func += self._gen_barectf_func_open_body(stream)
- func += '\n}'
- else:
- func += ';'
-
- return func
-
- # Generates the body of a barectf_init() function.
- #
- # stream: TSDL stream
- def _gen_barectf_func_init_body(self, stream):
- clines = []
-
- line = 'uint32_t ctx_at_bkup;'
- clines.append(_CLine(line))
-
- # bit position at beginning of event (to reset in case we run
- # out of space)
- line = 'uint32_t ctx_at_begin = {};'.format(self._CTX_AT)
- clines.append(_CLine(line))
- clines.append(_CLine(''))
-
- # set context parameters
- clines.append(_CLine("/* barectf context parameters */"))
- clines.append(_CLine('ctx->buf = buf;'))
- clines.append(_CLine('ctx->packet_size = buf_size * 8;'))
- clines.append(_CLine('{} = 0;'.format(self._CTX_AT)))
-
- if not self._manual_clock:
- clines.append(_CLine('ctx->clock_cb = clock_cb;'))
- clines.append(_CLine('ctx->clock_cb_data = clock_cb_data;'))
-
- # set context offsets
- clines.append(_CLine(''))
- clines.append(_CLine("/* barectf context offsets */"))
- ph_size, ph_offvars = self._get_tph_size_offvars()
- pc_size, pc_offvars = self._get_spc_size_offvars(stream)
- pc_alignment = self._get_obj_alignment(stream.packet_context)
- pc_offset = self._get_alignment(ph_size, pc_alignment)
-
- for offvar, offset in ph_offvars.items():
- offvar_field = self._get_offvar_name(offvar, 'tph')
- line = 'ctx->{} = {};'.format(offvar_field, offset)
- clines.append(_CLine(line))
-
- for offvar, offset in pc_offvars.items():
- offvar_field = self._get_offvar_name(offvar, 'spc')
- line = 'ctx->{} = {};'.format(offvar_field, pc_offset + offset)
- clines.append(_CLine(line))
-
- clines.append(_CLine(''))
-
- # packet header fields
- fcline_groups = []
- scope_name = 'trace.packet.header'
- scope_prefix = 'tph'
-
- for fname, ftype in self._doc.trace.packet_header.fields.items():
- # magic number
- if fname == 'magic':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix,
- lambda x: '0xc1fc1fc1UL')
- fcline_groups.append(fclines)
-
- # stream ID
- elif fname == 'stream_id':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix,
- lambda x: str(stream.id))
- fcline_groups.append(fclines)
-
- # return 0
- fcline_groups.append([_CLine('return 0;')])
-
- clines += self._join_cline_groups(fcline_groups)
-
- # get source
- cblock = _CBlock(clines)
- src = self._cblock_to_source(cblock)
-
- return src
-
- # Generates a barectf_init() function.
- #
- # stream: TSDL stream
- # gen_body: also generate function body
- # hide_sid: True to hide the stream ID
- def _gen_barectf_func_init(self, stream, gen_body, hide_sid=False):
- # fill template
- sid = ''
-
- if not hide_sid:
- sid = stream.id
-
- params = ''
-
- if not self._manual_clock:
- ts_ftype = stream.event_header['timestamp']
- ts_ptype = self._get_obj_param_ctype(ts_ftype)
- fmt = ',\n\t{} (*clock_cb)(void*),\n\tvoid* clock_cb_data'
- params = fmt.format(ts_ptype)
-
- t = barectf.templates.FUNC_INIT
- func = t.format(si=self._si_str, prefix=self._prefix, sid=sid,
- params=params)
-
- if gen_body:
- func += '\n{\n'
- func += self._gen_barectf_func_init_body(stream)
- func += '\n}'
- else:
- func += ';'
-
- return func
-
- # Generates the C expression to get the clock value depending on
- # whether we're in manual clock mode or not.
- def _gen_get_clock_value(self):
- if self._manual_clock:
- return 'param_clock'
- else:
- return self._CTX_CALL_CLOCK_CB
-
- # Returns True if the given TSDL stream has timestamp_begin and
- # timestamp_end fields.
- #
- # stream: TSDL stream to check
- def _stream_has_timestamp_begin_end(self, stream):
- return self._has_timestamp_begin_end[stream.id]
-
- # Returns the packet context offset (from the beginning of the
-    # packet) of a given TSDL stream.
- #
- # stream: TSDL stream
- def _get_stream_packet_context_offset(self, stream):
- return self._packet_context_offsets[stream.id]
-
- # Generates the C lines to write a barectf context field, saving
- # and restoring the current bit position accordingly.
- #
- # src_name: C source name
- # prefix: offset variable prefix
- # name: offset variable name
- # integer: TSDL integer to write
- def _gen_write_ctx_field_integer(self, src_name, prefix, name, integer):
- clines = []
-
- # save buffer position
- line = 'ctx_at_bkup = {};'.format(self._CTX_AT)
- clines.append(_CLine(line))
-
- # go back to field offset
- offvar = self._get_offvar_name(name, prefix)
- line = '{} = ctx->{};'.format(self._CTX_AT, offvar)
- clines.append(_CLine(line))
-
- # write value
- clines += self._write_field_integer(None, src_name, integer)
-
- # restore buffer position
- line = '{} = ctx_at_bkup;'.format(self._CTX_AT)
- clines.append(_CLine(line))
-
- return clines
-
- # Generates the body of a barectf_close() function.
- #
- # stream: TSDL stream
- def _gen_barectf_func_close_body(self, stream):
- clines = []
-
- line = 'uint32_t ctx_at_bkup;'
- clines.append(_CLine(line))
-
- # bit position at beginning of event (to reset in case we run
- # out of space)
- line = 'uint32_t ctx_at_begin = {};'.format(self._CTX_AT)
- clines.append(_CLine(line))
-
- # update timestamp end if present
- if self._stream_has_timestamp_begin_end(stream):
- clines.append(_CLine(''))
- clines.append(_CLine("/* update packet context's timestamp_end */"))
-
- # get clock value ASAP
- clk_type = self._get_clock_ctype(stream)
- clk = self._gen_get_clock_value()
- line = '{} clk_value = {};'.format(clk_type, clk)
- clines.append(_CLine(line))
-
- # write timestamp_end
- timestamp_end_integer = stream.packet_context['timestamp_end']
- clines += self._gen_write_ctx_field_integer('clk_value', 'spc',
- 'timestamp_end',
- timestamp_end_integer)
-
- # update content_size
- clines.append(_CLine(''))
- clines.append(_CLine("/* update packet context's content_size */"))
- content_size_integer = stream.packet_context['content_size']
- clines += self._gen_write_ctx_field_integer('ctx_at_bkup', 'spc',
- 'content_size',
- content_size_integer)
-
- # set events_discarded
- if 'events_discarded' in stream.packet_context.fields:
- # events_discarded parameter name (provided by user)
- pname = self._spc_fname_to_pname('events_discarded')
-
- # save buffer position
- clines.append(_CLine(''))
- line = 'ctx_at_bkup = {};'.format(self._CTX_AT)
- clines.append(_CLine(line))
-
- # go back to field offset
- offvar = self._get_offvar_name('events_discarded', 'spc')
- line = '{} = ctx->{};'.format(self._CTX_AT, offvar)
- clines.append(_CLine(line))
-
- # write value
- integer = stream.packet_context['events_discarded']
- clines += self._write_field_integer(None, pname, integer)
-
- # restore buffer position
- line = '{} = ctx_at_bkup;'.format(self._CTX_AT)
- clines.append(_CLine(line))
-
- # return 0
- clines.append(_CLine('\n'))
- clines.append(_CLine('return 0;'))
-
- # get source
- cblock = _CBlock(clines)
- src = self._cblock_to_source(cblock)
-
- return src
-
- # Generates a barectf_close() function.
- #
- # stream: TSDL stream
- # gen_body: also generate function body
- # hide_sid: True to hide the stream ID
- def _gen_barectf_func_close(self, stream, gen_body, hide_sid=False):
- # fill template
- sid = ''
-
- if not hide_sid:
- sid = stream.id
-
- params = ''
-
- if self._manual_clock:
- clock_param = self._gen_manual_clock_param(stream)
- params = ',\n\t{}'.format(clock_param)
-
- if 'events_discarded' in stream.packet_context.fields:
- ftype = stream.packet_context['events_discarded']
- ptype = self._get_obj_param_ctype(ftype)
- pname = self._spc_fname_to_pname('events_discarded')
- params += ',\n\t{} {}'.format(ptype, pname)
-
- t = barectf.templates.FUNC_CLOSE
- func = t.format(si=self._si_str, prefix=self._prefix, sid=sid,
- params=params)
-
- if gen_body:
- func += '\n{\n'
- func += self._gen_barectf_func_close_body(stream)
- func += '\n}'
- else:
- func += ';'
-
- return func
-
-    # Generates all barectf_init() functions.
- #
- # gen_body: also generate function bodies
- def _gen_barectf_funcs_init(self, gen_body):
- hide_sid = False
-
- if len(self._doc.streams) == 1:
- hide_sid = True
-
- funcs = []
-
- for stream in self._doc.streams.values():
- funcs.append(self._gen_barectf_func_init(stream, gen_body,
- hide_sid))
-
- return funcs
-
-    # Generates all barectf_open() functions.
- #
- # gen_body: also generate function bodies
- def _gen_barectf_funcs_open(self, gen_body):
- hide_sid = False
-
- if len(self._doc.streams) == 1:
- hide_sid = True
-
- funcs = []
-
- for stream in self._doc.streams.values():
- funcs.append(self._gen_barectf_func_open(stream, gen_body,
- hide_sid))
-
- return funcs
-
- # Generates the body of a barectf_trace() function.
- #
- # stream: TSDL stream of TSDL event to trace
- # event: TSDL event to trace
- def _gen_barectf_func_trace_event_body(self, stream, event):
- clines = []
-
- # get clock value ASAP
- clk_type = self._get_clock_ctype(stream)
- clk = self._gen_get_clock_value()
- line = '{} clk_value = {};'.format(clk_type, clk)
- clines.append(_CLine(line))
- clines.append(_CLine(''))
-
-        # bit position backup (may be used by the field writes below)
- clines.append(_CLine('uint32_t ctx_at_bkup;'))
-
- # bit position at beginning of event (to reset in case we run
- # out of space)
- line = 'uint32_t ctx_at_begin = {};'.format(self._CTX_AT)
- clines.append(_CLine(line))
- clines.append(_CLine(''))
-
- # event header
- fcline_groups = []
- scope_name = 'event.header'
- scope_prefix = 'eh'
-
- for fname, ftype in stream.event_header.fields.items():
- # id
- if fname == 'id':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix,
- lambda x: str(event.id))
- fcline_groups.append(fclines)
-
- # timestamp
- elif fname == 'timestamp':
- fclines = self._field_to_clines(fname, ftype, scope_name,
- scope_prefix,
- lambda x: 'clk_value')
- fcline_groups.append(fclines)
-
- # stream event context
- if stream.event_context is not None:
- fclines = self._struct_to_clines(stream.event_context,
- 'stream.event.context', 'sec',
- self._sec_fname_to_pname)
- fcline_groups.append(fclines)
-
- # event context
- if event.context is not None:
- fclines = self._struct_to_clines(event.context,
- 'event.context', 'ec',
- self._ec_fname_to_pname)
- fcline_groups.append(fclines)
-
- # event fields
- if event.fields is not None:
- fclines = self._struct_to_clines(event.fields,
- 'event.fields', 'ef',
- self._ef_fname_to_pname)
- fcline_groups.append(fclines)
-
- # return 0
- fcline_groups.append([_CLine('return 0;')])
-
- clines += self._join_cline_groups(fcline_groups)
-
- # get source
- cblock = _CBlock(clines)
- src = self._cblock_to_source(cblock)
-
- return src
-
- # Generates a barectf_trace() function.
- #
- # stream: TSDL stream containing the TSDL event to trace
- # event: TSDL event to trace
- # gen_body: also generate function body
- # hide_sid: True to hide the stream ID
- def _gen_barectf_func_trace_event(self, stream, event, gen_body, hide_sid):
- params = []
-
- # manual clock
- if self._manual_clock:
- clock_param = self._gen_manual_clock_param(stream)
- params.append(clock_param)
-
- # stream event context params
- if stream.event_context is not None:
- for fname, ftype in stream.event_context.fields.items():
- ptype = self._get_obj_param_ctype(ftype)
- pname = self._sec_fname_to_pname(fname)
- param = '{} {}'.format(ptype, pname)
- params.append(param)
-
- # event context params
- if event.context is not None:
- for fname, ftype in event.context.fields.items():
- ptype = self._get_obj_param_ctype(ftype)
- pname = self._ec_fname_to_pname(fname)
- param = '{} {}'.format(ptype, pname)
- params.append(param)
-
- # event fields params
- if event.fields is not None:
- for fname, ftype in event.fields.fields.items():
- ptype = self._get_obj_param_ctype(ftype)
- pname = self._ef_fname_to_pname(fname)
- param = '{} {}'.format(ptype, pname)
- params.append(param)
-
- params_str = ''
-
- if params:
- params_str = ',\n\t'.join([''] + params)
-
- # fill template
- sid = ''
-
- if not hide_sid:
- sid = stream.id
-
- t = barectf.templates.FUNC_TRACE
- func = t.format(si=self._si_str, prefix=self._prefix, sid=sid,
- evname=event.name, params=params_str)
-
- if gen_body:
- func += '\n{\n'
- func += self._gen_barectf_func_trace_event_body(stream, event)
- func += '\n}'
- else:
- func += ';'
-
- return func
-
- # Generates all barectf_trace() functions of a given TSDL stream.
- #
- # stream: TSDL stream containing the TSDL events to trace
- # gen_body: also generate function body
- # hide_sid: True to hide the stream ID
- def _gen_barectf_funcs_trace_stream(self, stream, gen_body, hide_sid):
- funcs = []
-
- for event in stream.events:
- funcs.append(self._gen_barectf_func_trace_event(stream, event,
- gen_body, hide_sid))
-
- return funcs
-
- # Generates all barectf_trace() function.
- #
- # gen_body: also generate function bodies
- def _gen_barectf_funcs_trace(self, gen_body):
- hide_sid = False
-
- if len(self._doc.streams) == 1:
- hide_sid = True
-
- funcs = []
-
- for stream in self._doc.streams.values():
- funcs += self._gen_barectf_funcs_trace_stream(stream, gen_body,
- hide_sid)
-
- return funcs
-
- # Generates all barectf_close() function.
- #
- # gen_body: also generate function bodies
- def _gen_barectf_funcs_close(self, gen_body):
- hide_sid = False
-
- if len(self._doc.streams) == 1:
- hide_sid = True
-
- funcs = []
-
- for stream in self._doc.streams.values():
- funcs.append(self._gen_barectf_func_close(stream, gen_body,
- hide_sid))
-
- return funcs
-
- # Generate all barectf functions
- #
- # gen_body: also generate function bodies
- def _gen_barectf_functions(self, gen_body):
- init_funcs = self._gen_barectf_funcs_init(gen_body)
- open_funcs = self._gen_barectf_funcs_open(gen_body)
- close_funcs = self._gen_barectf_funcs_close(gen_body)
- trace_funcs = self._gen_barectf_funcs_trace(gen_body)
-
- return init_funcs + open_funcs + close_funcs + trace_funcs
-
- # Generates the barectf header C source
- def _gen_barectf_header(self):
- ctx_structs = self._gen_barectf_contexts_struct()
- functions = self._gen_barectf_functions(self._static_inline)
- functions_str = '\n\n'.join(functions)
- t = barectf.templates.HEADER
- header = t.format(prefix=self._prefix, ucprefix=self._prefix.upper(),
- barectf_ctx=ctx_structs, functions=functions_str)
-
- return header
-
- _BO_DEF_MAP = {
- pytsdl.tsdl.ByteOrder.BE: 'BIG_ENDIAN',
- pytsdl.tsdl.ByteOrder.LE: 'LITTLE_ENDIAN',
- }
-
- # Generates the barectf bitfield.h header.
- def _gen_barectf_bitfield_header(self):
- header = barectf.templates.BITFIELD
- header = header.replace('$prefix$', self._prefix)
- header = header.replace('$PREFIX$', self._prefix.upper())
- endian_def = self._BO_DEF_MAP[self._doc.trace.byte_order]
- header = header.replace('$ENDIAN_DEF$', endian_def)
-
- return header
-
- # Generates the main barectf C source file.
- def _gen_barectf_csrc(self):
- functions = self._gen_barectf_functions(True)
- functions_str = '\n\n'.join(functions)
- t = barectf.templates.CSRC
- csrc = t.format(prefix=self._prefix, ucprefix=self._prefix.upper(),
- functions=functions_str)
-
- return csrc
-
- # Writes a file to the generator's output.
- #
- # name: file name
- # contents: file contents
- def _write_file(self, name, contents):
- path = os.path.join(self._output, name)
- try:
- with open(path, 'w') as f:
- f.write(contents)
- except Exception as e:
- _perror('cannot write "{}": {}'.format(path, e))
-
- # Converts a C block to actual C source lines.
- #
- # cblock: C block
- # indent: initial indentation
- def _cblock_to_source_lines(self, cblock, indent=1):
- src = []
- indentstr = '\t' * indent
-
- for line in cblock:
- if type(line) is _CBlock:
- src += self._cblock_to_source_lines(line, indent + 1)
- else:
- src.append(indentstr + line)
-
- return src
-
- # Converts a C block to an actual C source string.
- #
- # cblock: C block
- # indent: initial indentation
- def _cblock_to_source(self, cblock, indent=1):
- lines = self._cblock_to_source_lines(cblock, indent)
-
- return '\n'.join(lines)
-
- # Sets the generator parameters.
- def _set_params(self):
- # streams have timestamp_begin/timestamp_end fields
- self._has_timestamp_begin_end = {}
-
- for stream in self._doc.streams.values():
- has = 'timestamp_begin' in stream.packet_context.fields
- self._has_timestamp_begin_end[stream.id] = has
-
- # packet header size with alignment
- self._packet_context_offsets = {}
-
- tph_size = self._get_struct_size(self._doc.trace.packet_header)
-
- for stream in self._doc.streams.values():
- spc_alignment = self._get_obj_alignment(stream.packet_context)
- spc_offset = self._get_alignment(tph_size, spc_alignment)
- self._packet_context_offsets[stream.id] = spc_offset
-
- # Generates barectf C files.
- #
- # metadata: metadata path
- # output: output directory
- # prefix: prefix
- # static_inline: generate static inline functions
- # manual_clock: do not use a clock callback: pass clock value to
- # tracing functions
- def gen_barectf(self, metadata, output, prefix, static_inline,
- manual_clock):
- self._metadata = metadata
- self._output = output
- self._prefix = prefix
- self._static_inline = static_inline
- self._manual_clock = manual_clock
- self._si_str = ''
-
- if static_inline:
- self._si_str = 'static inline '
-
- # open CTF metadata file
- _pinfo('opening CTF metadata file "{}"'.format(self._metadata))
-
- try:
- with open(metadata) as f:
- self._tsdl = f.read()
- except:
- _perror('cannot open/read CTF metadata file "{}"'.format(metadata))
+def run():
+ # parse arguments
+ args = _parse_args()
- # parse CTF metadata
- _pinfo('parsing CTF metadata file')
+ # create configuration
+ try:
+ config = barectf.config.from_yaml_file(args.config)
+ except barectf.config.ConfigError as e:
+ _pconfig_error(e)
+ except Exception as e:
+ _perror('unknown exception: {}'.format(e))
+ # replace prefix if needed
+ if args.prefix:
try:
- self._doc = self._parser.parse(self._tsdl)
- except pytsdl.parser.ParseError as e:
- _perror('parse error: {}'.format(e))
-
- # validate CTF metadata against barectf constraints
- _pinfo('validating CTF metadata file')
- self._validate_metadata()
- _psuccess('CTF metadata file is valid')
-
- # set parameters for this generation
- self._set_params()
-
- # generate header
- _pinfo('generating barectf header files')
- header = self._gen_barectf_header()
- self._write_file('{}.h'.format(self._prefix), header)
- header = self._gen_barectf_bitfield_header()
- self._write_file('{}_bitfield.h'.format(self._prefix), header)
-
- # generate C source file
- if not self._static_inline:
- _pinfo('generating barectf C source file')
- csrc = self._gen_barectf_csrc()
- self._write_file('{}.c'.format(self._prefix), csrc)
-
- _psuccess('done')
-
-
-def run():
- args = _parse_args()
- generator = BarectfCodeGenerator()
- generator.gen_barectf(args.metadata, args.output, args.prefix,
- args.static_inline, args.manual_clock)
+ config.prefix = args.prefix
+ except barectf.config.ConfigError as e:
+ _pconfig_error(e)
+
+ # generate metadata
+ metadata = barectf.tsdl182gen.from_metadata(config.metadata)
+
+ try:
+ _write_file(args.metadata_dir, 'metadata', metadata)
+ except Exception as e:
+ _perror('cannot write metadata file: {}'.format(e))
+
+ # create generator
+ generator = barectf.gen.CCodeGenerator(config)
+
+ # generate C headers
+ header = generator.generate_header()
+ bitfield_header = generator.generate_bitfield_header()
+
+ try:
+ _write_file(args.headers_dir, generator.get_header_filename(), header)
+ _write_file(args.headers_dir, generator.get_bitfield_header_filename(),
+ bitfield_header)
+ except Exception as e:
+ _perror('cannot write header files: {}'.format(e))
+
+ # generate C source
+ c_src = generator.generate_c_src()
+
+ try:
+ _write_file(args.code_dir, '{}.c'.format(config.prefix.rstrip('_')),
+ c_src)
+ except Exception as e:
+ _perror('cannot write C source file: {}'.format(e))
--- /dev/null
+# The MIT License (MIT)
+#
+# Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+
+
+class CodeGenerator:
+ def __init__(self, indent_string):
+ self._indent_string = indent_string
+ self.reset()
+
+ @property
+ def code(self):
+ return '\n'.join(self._lines)
+
+ def reset(self):
+ self._lines = []
+ self._indent = 0
+ self._glue = False
+
+ def add_line(self, line):
+ if self._glue:
+ self.append_to_last_line(line)
+ self._glue = False
+ return
+
+ indent_string = self._get_indent_string()
+ self._lines.append(indent_string + str(line))
+
+ def add_lines(self, lines):
+ if type(lines) is str:
+ lines = lines.split('\n')
+
+ for line in lines:
+ self.add_line(line)
+
+ def add_glue(self):
+ self._glue = True
+
+ def append_to_last_line(self, s):
+ if self._lines:
+ self._lines[-1] += str(s)
+
+ def add_empty_line(self):
+ self._lines.append('')
+
+ def add_cc_line(self, comment):
+ self.add_line('/* {} */'.format(comment))
+
+ def append_cc_to_last_line(self, comment, with_space=True):
+ if with_space:
+ sp = ' '
+ else:
+ sp = ''
+
+ self.append_to_last_line('{}/* {} */'.format(sp, comment))
+
+ def indent(self):
+ self._indent += 1
+
+ def unindent(self):
+ self._indent = max(self._indent - 1, 0)
+
+ def _get_indent_string(self):
+ return self._indent_string * self._indent
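The `CodeGenerator` helper above is small enough to exercise directly. Here is a condensed, standalone copy (the comment helpers are omitted) sketching how the glue and indent mechanics combine when emitting C code:

```python
# Condensed copy of the CodeGenerator class, keeping only the methods
# needed to demonstrate gluing and indentation.
class CodeGenerator:
    def __init__(self, indent_string):
        self._indent_string = indent_string
        self.reset()

    @property
    def code(self):
        return '\n'.join(self._lines)

    def reset(self):
        self._lines = []
        self._indent = 0
        self._glue = False

    def add_line(self, line):
        if self._glue:
            # glue mode: append to the previous line instead of
            # starting a new one
            self.append_to_last_line(line)
            self._glue = False
            return

        self._lines.append(self._indent_string * self._indent + str(line))

    def add_glue(self):
        self._glue = True

    def append_to_last_line(self, s):
        if self._lines:
            self._lines[-1] += str(s)

    def indent(self):
        self._indent += 1

    def unindent(self):
        self._indent = max(self._indent - 1, 0)


cg = CodeGenerator('\t')
cg.add_line('if (ctx->at == ctx->packet_size)')
cg.add_glue()
cg.add_line(' {')          # glued to the previous line
cg.indent()
cg.add_line('return -1;')
cg.unindent()
cg.add_line('}')
print(cg.code)
# if (ctx->at == ctx->packet_size) {
#         return -1;
# }
```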
--- /dev/null
+# The MIT License (MIT)
+#
+# Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+
+from barectf import metadata
+import collections
+import datetime
+import barectf
+import enum
+import yaml
+import uuid
+import copy
+import re
+
+
+class ConfigError(RuntimeError):
+ def __init__(self, msg, prev=None):
+ super().__init__(msg)
+ self._prev = prev
+
+ @property
+ def prev(self):
+ return self._prev
+
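A `ConfigError` carries the previous error of the chain in its `prev` property, so nested validation failures can be unwound into their full context. A minimal standalone sketch (the `config_error_messages()` helper is illustrative, not part of barectf):

```python
# Standalone copy of ConfigError plus a hypothetical helper that walks
# the prev chain, outermost context first.
class ConfigError(RuntimeError):
    def __init__(self, msg, prev=None):
        super().__init__(msg)
        self._prev = prev

    @property
    def prev(self):
        return self._prev


def config_error_messages(exc):
    # collect the message of each error in the chain
    msgs = []

    while exc is not None:
        msgs.append(str(exc))
        exc = exc.prev if isinstance(exc, ConfigError) else None

    return msgs


inner = ConfigError('prefix must be a valid C identifier')
outer = ConfigError('barectf metadata error', inner)
print(config_error_messages(outer))
# → ['barectf metadata error', 'prefix must be a valid C identifier']
```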
+
+class Config:
+ def __init__(self, version, prefix, metadata):
+ self.prefix = prefix
+ self.version = version
+ self.metadata = metadata
+
+ def _validate_metadata(self, meta):
+ try:
+ validator = _MetadataTypesHistologyValidator()
+ validator.validate(meta)
+ validator = _MetadataDynamicTypesValidator()
+ validator.validate(meta)
+ validator = _MetadataSpecialFieldsValidator()
+ validator.validate(meta)
+ except Exception as e:
+ raise ConfigError('metadata error', e)
+
+ try:
+ validator = _BarectfMetadataValidator()
+ validator.validate(meta)
+ except Exception as e:
+ raise ConfigError('barectf metadata error', e)
+
+ def _augment_metadata_env(self, meta):
+ env = meta.env
+
+ env['domain'] = 'bare'
+ env['tracer_name'] = 'barectf'
+ version_tuple = barectf.get_version_tuple()
+ env['tracer_major'] = version_tuple[0]
+ env['tracer_minor'] = version_tuple[1]
+ env['tracer_patch'] = version_tuple[2]
+ env['barectf_gen_date'] = str(datetime.datetime.now().isoformat())
+
+ @property
+ def version(self):
+ return self._version
+
+ @version.setter
+ def version(self, value):
+ self._version = value
+
+ @property
+ def metadata(self):
+ return self._metadata
+
+ @metadata.setter
+ def metadata(self, value):
+ self._validate_metadata(value)
+ self._augment_metadata_env(value)
+ self._metadata = value
+
+ @property
+ def prefix(self):
+ return self._prefix
+
+ @prefix.setter
+ def prefix(self, value):
+ if not is_valid_identifier(value):
+ raise ConfigError('prefix must be a valid C identifier')
+
+ self._prefix = value
+
+
+def _is_assoc_array_prop(node):
+ return isinstance(node, dict)
+
+
+def _is_array_prop(node):
+ return isinstance(node, list)
+
+
+def _is_int_prop(node):
+ return type(node) is int
+
+
+def _is_str_prop(node):
+ return type(node) is str
+
+
+def _is_bool_prop(node):
+ return type(node) is bool
+
+
+def _is_valid_alignment(align):
+ return ((align & (align - 1)) == 0) and align > 0
+
+
+def _byte_order_str_to_bo(bo_str):
+ bo_str = bo_str.lower()
+
+ if bo_str == 'le':
+ return metadata.ByteOrder.LE
+ elif bo_str == 'be':
+ return metadata.ByteOrder.BE
+
+
+def _encoding_str_to_encoding(encoding_str):
+ encoding_str = encoding_str.lower()
+
+ if encoding_str == 'utf-8' or encoding_str == 'utf8':
+ return metadata.Encoding.UTF8
+ elif encoding_str == 'ascii':
+ return metadata.Encoding.ASCII
+ elif encoding_str == 'none':
+ return metadata.Encoding.NONE
+
+
+_re_iden = re.compile(r'^[a-zA-Z][a-zA-Z0-9_]*$')
+_ctf_keywords = set([
+ 'align',
+ 'callsite',
+ 'clock',
+ 'enum',
+ 'env',
+ 'event',
+ 'floating_point',
+ 'integer',
+ 'stream',
+ 'string',
+ 'struct',
+ 'trace',
+ 'typealias',
+ 'typedef',
+ 'variant',
+])
+
+
+def is_valid_identifier(iden):
+ if not _re_iden.match(iden):
+ return False
+
+ if iden in _ctf_keywords:
+ return False
+
+ return True
+
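A standalone sketch of the identifier check above, with the keyword lookup applied to the candidate string itself and a truncated keyword set for brevity:

```python
import re

# same pattern as above: a C identifier starting with a letter
_re_iden = re.compile(r'^[a-zA-Z][a-zA-Z0-9_]*$')

# truncated copy of the CTF keyword set, for illustration
_ctf_keywords = {'align', 'clock', 'enum', 'event', 'stream', 'struct', 'trace'}


def is_valid_identifier(iden):
    # reject non-identifiers and reserved CTF keywords
    return bool(_re_iden.match(iden)) and iden not in _ctf_keywords


print(is_valid_identifier('my_event'))  # → True
print(is_valid_identifier('event'))     # → False (CTF keyword)
print(is_valid_identifier('2fast'))     # → False (starts with a digit)
```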
+
+def _get_first_unknown_prop(node, known_props):
+ for prop_name in node:
+ if prop_name in known_props:
+ continue
+
+ return prop_name
+
+
+def _get_first_unknown_type_prop(type_node, known_props):
+ kp = known_props + ['inherit', 'class']
+
+ return _get_first_unknown_prop(type_node, kp)
+
+
+# This validator validates the configured metadata for barectf-specific
+# needs.
+#
+# barectf needs:
+#
+# * all headers/contexts to be at least byte-aligned
+# * all integer and floating point number sizes to be <= 64 bits
+# * no inner structures, arrays, or variants
+class _BarectfMetadataValidator:
+ def __init__(self):
+ self._type_to_validate_type_func = {
+ metadata.Integer: self._validate_int_type,
+ metadata.FloatingPoint: self._validate_float_type,
+ metadata.Enum: self._validate_enum_type,
+ metadata.String: self._validate_string_type,
+ metadata.Struct: self._validate_struct_type,
+ metadata.Array: self._validate_array_type,
+ metadata.Variant: self._validate_variant_type,
+ }
+
+ def _validate_int_type(self, t, entity_root):
+ if t.size > 64:
+ raise ConfigError('integer type\'s size must be less than or equal to 64 bits')
+
+ def _validate_float_type(self, t, entity_root):
+ if t.size > 64:
+ raise ConfigError('floating point number type\'s size must be less than or equal to 64 bits')
+
+ def _validate_enum_type(self, t, entity_root):
+ if t.value_type.size > 64:
+ raise ConfigError('enumeration type\'s integer type\'s size must be less than or equal to 64 bits')
+
+ def _validate_string_type(self, t, entity_root):
+ pass
+
+ def _validate_struct_type(self, t, entity_root):
+ if not entity_root:
+ raise ConfigError('inner structure types are not supported as of this version')
+
+ for field_name, field_type in t.fields.items():
+ if entity_root and self._cur_entity is _Entity.TRACE_PACKET_HEADER:
+ if field_name == 'uuid':
+ # allow
+ continue
+
+ try:
+ self._validate_type(field_type, False)
+ except Exception as e:
+ raise ConfigError('in structure type\'s field "{}"'.format(field_name), e)
+
+ def _validate_array_type(self, t, entity_root):
+ raise ConfigError('array types are not supported as of this version')
+
+ def _validate_variant_type(self, t, entity_root):
+ raise ConfigError('variant types are not supported as of this version')
+
+ def _validate_type(self, t, entity_root):
+ self._type_to_validate_type_func[type(t)](t, entity_root)
+
+ def _validate_entity(self, t):
+ if t is None:
+ return
+
+ # make sure entity is byte-aligned
+ if t.align < 8:
+ raise ConfigError('type must be at least byte-aligned')
+
+ # make sure entity is a structure
+ if type(t) is not metadata.Struct:
+ raise ConfigError('expecting a structure type')
+
+ # validate types
+ self._validate_type(t, True)
+
+ def _validate_entities_and_names(self, meta):
+ self._cur_entity = _Entity.TRACE_PACKET_HEADER
+
+ try:
+ self._validate_entity(meta.trace.packet_header_type)
+ except Exception as e:
+ raise ConfigError('invalid trace packet header type', e)
+
+ for stream_name, stream in meta.streams.items():
+ if not is_valid_identifier(stream_name):
+ raise ConfigError('stream name "{}" is not a valid C identifier'.format(stream_name))
+
+ self._cur_entity = _Entity.STREAM_PACKET_CONTEXT
+
+ try:
+ self._validate_entity(stream.packet_context_type)
+ except Exception as e:
+ raise ConfigError('invalid packet context type in stream "{}"'.format(stream_name), e)
+
+ self._cur_entity = _Entity.STREAM_EVENT_HEADER
+
+ try:
+ self._validate_entity(stream.event_header_type)
+ except Exception as e:
+ raise ConfigError('invalid event header type in stream "{}"'.format(stream_name), e)
+
+ self._cur_entity = _Entity.STREAM_EVENT_CONTEXT
+
+ try:
+ self._validate_entity(stream.event_context_type)
+ except Exception as e:
+ raise ConfigError('invalid event context type in stream "{}"'.format(stream_name), e)
+
+ try:
+ for ev_name, ev in stream.events.items():
+ if not is_valid_identifier(ev_name):
+ raise ConfigError('event name "{}" is not a valid C identifier'.format(ev_name))
+
+ self._cur_entity = _Entity.EVENT_CONTEXT
+
+ try:
+ self._validate_entity(ev.context_type)
+ except Exception as e:
+ raise ConfigError('invalid context type in event "{}"'.format(ev_name), e)
+
+ self._cur_entity = _Entity.EVENT_PAYLOAD
+
+ if ev.payload_type is None:
+ raise ConfigError('missing payload type in event "{}"'.format(ev_name))
+
+ try:
+ self._validate_entity(ev.payload_type)
+ except Exception as e:
+ raise ConfigError('invalid payload type in event "{}"'.format(ev_name), e)
+
+ if not ev.payload_type.fields:
+ raise ConfigError('empty payload type in event "{}"'.format(ev_name))
+ except Exception as e:
+ raise ConfigError('invalid stream "{}"'.format(stream_name), e)
+
+ def validate(self, meta):
+ self._validate_entities_and_names(meta)
+
+
+# This validator validates the special fields of the trace, stream, and
+# event types. For example, it checks that the "stream_id" field exists
+# in the trace packet header type when there is more than one stream,
+# among other checks.
+class _MetadataSpecialFieldsValidator:
+ def _validate_trace_packet_header_type(self, t):
+ # needs "stream_id" field?
+ if len(self._meta.streams) > 1:
+ # yes
+ if t is None:
+ raise ConfigError('need "stream_id" field in trace packet header type, but trace packet header type is missing')
+
+ if type(t) is not metadata.Struct:
+ raise ConfigError('need "stream_id" field in trace packet header type, but trace packet header type is not a structure type')
+
+ if 'stream_id' not in t.fields:
+ raise ConfigError('need "stream_id" field in trace packet header type')
+
+ # validate "magic" and "stream_id" types
+ if type(t) is not metadata.Struct:
+ return
+
+ for i, (field_name, field_type) in enumerate(t.fields.items()):
+ if field_name == 'magic':
+ if type(field_type) is not metadata.Integer:
+ raise ConfigError('"magic" field in trace packet header type must be an integer type')
+
+ if field_type.signed or field_type.size != 32:
+ raise ConfigError('"magic" field in trace packet header type must be a 32-bit unsigned integer type')
+
+ if i != 0:
+ raise ConfigError('"magic" field must be the first trace packet header type\'s field')
+ elif field_name == 'stream_id':
+ if type(field_type) is not metadata.Integer:
+ raise ConfigError('"stream_id" field in trace packet header type must be an integer type')
+
+ if field_type.signed:
+ raise ConfigError('"stream_id" field in trace packet header type must be an unsigned integer type')
+ elif field_name == 'uuid':
+ if self._meta.trace.uuid is None:
+ raise ConfigError('"uuid" field in trace packet header type specified, but no trace UUID provided')
+
+ if type(field_type) is not metadata.Array:
+ raise ConfigError('"uuid" field in trace packet header type must be an array')
+
+ if field_type.length != 16:
+ raise ConfigError('"uuid" field in trace packet header type must be an array of 16 bytes')
+
+ element_type = field_type.element_type
+
+ if type(element_type) is not metadata.Integer:
+ raise ConfigError('"uuid" field in trace packet header type must be an array of 16 bytes')
+
+ if element_type.size != 8:
+ raise ConfigError('"uuid" field in trace packet header type must be an array of 16 bytes')
+
+ if element_type.align != 8:
+ raise ConfigError('"uuid" field in trace packet header type must be an array of 16 byte-aligned bytes')
+
+ def _validate_trace(self, meta):
+ self._validate_trace_packet_header_type(meta.trace.packet_header_type)
+
+ def _validate_stream_packet_context(self, stream):
+ t = stream.packet_context_type
+
+ if t is None:
+ return
+
+ if type(t) is not metadata.Struct:
+ return
+
+ # "timestamp_begin", if exists, is an unsigned integer type,
+ # mapped to a clock
+ if 'timestamp_begin' in t.fields:
+ ts_begin = t.fields['timestamp_begin']
+
+ if type(ts_begin) is not metadata.Integer:
+ raise ConfigError('"timestamp_begin" field in stream packet context type must be an integer type')
+
+ if ts_begin.signed:
+ raise ConfigError('"timestamp_begin" field in stream packet context type must be an unsigned integer type')
+
+ if not ts_begin.property_mappings:
+ raise ConfigError('"timestamp_begin" field in stream packet context type must be mapped to a clock')
+
+ # "timestamp_end", if exists, is an unsigned integer type,
+ # mapped to a clock
+ if 'timestamp_end' in t.fields:
+ ts_end = t.fields['timestamp_end']
+
+ if type(ts_end) is not metadata.Integer:
+ raise ConfigError('"timestamp_end" field in stream packet context type must be an integer type')
+
+ if ts_end.signed:
+ raise ConfigError('"timestamp_end" field in stream packet context type must be an unsigned integer type')
+
+ if not ts_end.property_mappings:
+ raise ConfigError('"timestamp_end" field in stream packet context type must be mapped to a clock')
+
+ # "timestamp_begin" and "timestamp_end" exist together
+ if (('timestamp_begin' in t.fields) ^ ('timestamp_end' in t.fields)):
+ raise ConfigError('"timestamp_begin" and "timestamp_end" fields must be defined together in stream packet context type')
+
+ # "events_discarded", if exists, is an unsigned integer type
+ if 'events_discarded' in t.fields:
+ events_discarded = t.fields['events_discarded']
+
+ if type(events_discarded) is not metadata.Integer:
+ raise ConfigError('"events_discarded" field in stream packet context type must be an integer type')
+
+ if events_discarded.signed:
+ raise ConfigError('"events_discarded" field in stream packet context type must be an unsigned integer type')
+
+ # "packet_size" and "content_size" must exist
+ if 'packet_size' not in t.fields:
+ raise ConfigError('missing "packet_size" field in stream packet context type')
+
+ packet_size = t.fields['packet_size']
+
+ # "content_size" and "content_size" must exist
+ if 'content_size' not in t.fields:
+ raise ConfigError('missing "content_size" field in stream packet context type')
+
+ content_size = t.fields['content_size']
+
+ # "packet_size" is an unsigned integer type
+ if type(packet_size) is not metadata.Integer:
+ raise ConfigError('"packet_size" field in stream packet context type must be an integer type')
+
+ if packet_size.signed:
+ raise ConfigError('"packet_size" field in stream packet context type must be an unsigned integer type')
+
+ # "content_size" is an unsigned integer type
+ if type(content_size) is not metadata.Integer:
+ raise ConfigError('"content_size" field in stream packet context type must be an integer type')
+
+ if content_size.signed:
+ raise ConfigError('"content_size" field in stream packet context type must be an unsigned integer type')
+
+ def _validate_stream_event_header(self, stream):
+ t = stream.event_header_type
+
+ # needs "id" field?
+ if len(stream.events) > 1:
+ # yes
+ if t is None:
+ raise ConfigError('need "id" field in stream event header type, but stream event header type is missing')
+
+ if type(t) is not metadata.Struct:
+ raise ConfigError('need "id" field in stream event header type, but stream event header type is not a structure type')
+
+ if 'id' not in t.fields:
+ raise ConfigError('need "id" field in stream event header type')
+
+ # validate "id" and "timestamp" types
+ if type(t) is not metadata.Struct:
+ return
+
+ # "timestamp", if exists, is an unsigned integer type,
+ # mapped to a clock
+ if 'timestamp' in t.fields:
+ ts = t.fields['timestamp']
+
+ if type(ts) is not metadata.Integer:
+ raise ConfigError('"timestamp" field in stream event header type must be an integer type')
+
+ if ts.signed:
+ raise ConfigError('"timestamp" field in stream event header type must be an unsigned integer type')
+
+ if not ts.property_mappings:
+ raise ConfigError('"timestamp" field in stream event header type must be mapped to a clock')
+
+ # "id" is an unsigned integer type
+ if 'id' in t.fields:
+ eid = t.fields['id']
+
+ if type(eid) is not metadata.Integer:
+ raise ConfigError('"id" field in stream event header type must be an integer type')
+
+ if eid.signed:
+ raise ConfigError('"id" field in stream event header type must be an unsigned integer type')
+
+ def _validate_stream(self, stream):
+ self._validate_stream_packet_context(stream)
+ self._validate_stream_event_header(stream)
+
+ def validate(self, meta):
+ self._meta = meta
+ self._validate_trace(meta)
+
+ for stream in meta.streams.values():
+ try:
+ self._validate_stream(stream)
+ except Exception as e:
+ raise ConfigError('invalid stream "{}"'.format(stream.name), e)
+
+
+class _MetadataDynamicTypesValidatorStackEntry:
+ def __init__(self, base_t):
+ self._base_t = base_t
+ self._index = 0
+
+ @property
+ def index(self):
+ return self._index
+
+ @index.setter
+ def index(self, value):
+ self._index = value
+
+ @property
+ def base_t(self):
+ return self._base_t
+
+ @base_t.setter
+ def base_t(self, value):
+ self._base_t = value
+
+
+# Entities. Order of values is important here.
+@enum.unique
+class _Entity(enum.IntEnum):
+ TRACE_PACKET_HEADER = 0
+ STREAM_PACKET_CONTEXT = 1
+ STREAM_EVENT_HEADER = 2
+ STREAM_EVENT_CONTEXT = 3
+ EVENT_CONTEXT = 4
+ EVENT_PAYLOAD = 5
+
+
+# This validator validates dynamic metadata types, that is, it ensures
+# variable-length array lengths and variant tags actually point to
+# something that exists. It also checks that variable-length array
+# lengths point to integer types and variant tags to enumeration types.
+class _MetadataDynamicTypesValidator:
+ def __init__(self):
+ self._type_to_visit_type_func = {
+ metadata.Integer: None,
+ metadata.FloatingPoint: None,
+ metadata.Enum: None,
+ metadata.String: None,
+ metadata.Struct: self._visit_struct_type,
+ metadata.Array: self._visit_array_type,
+ metadata.Variant: self._visit_variant_type,
+ }
+
+ self._cur_trace = None
+ self._cur_stream = None
+ self._cur_event = None
+
+ def _lookup_path_from_base(self, path, parts, base, start_index,
+ base_is_current, from_t):
+ index = start_index
+ cur_t = base
+ found_path = []
+
+ while index < len(parts):
+ part = parts[index]
+ next_t = None
+
+ if type(cur_t) is metadata.Struct:
+ enumerated_items = enumerate(cur_t.fields.items())
+
+ # lookup each field
+ for i, (field_name, field_type) in enumerated_items:
+ if field_name == part:
+ next_t = field_type
+ found_path.append((i, field_type))
+
+ if next_t is None:
+ raise ConfigError('invalid path "{}": cannot find field "{}" in structure type'.format(path, part))
+ elif type(cur_t) is metadata.Variant:
+ enumerated_items = enumerate(cur_t.types.items())
+
+ # lookup each type
+ for i, (type_name, type_type) in enumerated_items:
+ if type_name == part:
+ next_t = type_type
+ found_path.append((i, type_type))
+
+ if next_t is None:
+ raise ConfigError('invalid path "{}": cannot find type "{}" in variant type'.format(path, part))
+ else:
+ raise ConfigError('invalid path "{}": requesting "{}" in a non-variant, non-structure type'.format(path, part))
+
+ cur_t = next_t
+ index += 1
+
+ # make sure that the pointed type is not the pointing type
+ if from_t is cur_t:
+ raise ConfigError('invalid path "{}": pointing to self'.format(path))
+
+ # if we're here, we found the type; however, it could be located
+ # _after_ the variant/VLA looking for it, if the pointing
+ # and pointed types are in the same entity, so compare the
+ # current stack entries indexes to our index path in that case
+ if not base_is_current:
+ return cur_t
+
+ for index, entry in enumerate(self._stack):
+ if index == len(found_path):
+ # end of index path; valid so far
+ break
+
+ if found_path[index][0] > entry.index:
+ raise ConfigError('invalid path "{}": pointed type is after pointing type'.format(path))
+
+ # also make sure that both pointed and pointing types share
+ # a common structure ancestor
+ for index, entry in enumerate(self._stack):
+ if index == len(found_path):
+ break
+
+ if entry.base_t is not found_path[index][1]:
+ # found common ancestor
+ if type(entry.base_t) is metadata.Variant:
+ raise ConfigError('invalid path "{}": type cannot be reached because pointed and pointing types are in the same variant type'.format(path))
+
+ return cur_t
+
+ def _lookup_path_from_top(self, path, parts):
+ if len(parts) != 1:
+ raise ConfigError('invalid path "{}": multipart relative path not supported'.format(path))
+
+ find_name = parts[0]
+ index = len(self._stack) - 1
+ got_struct = False
+
+ # check stack entries in reversed order
+ for entry in reversed(self._stack):
+ # structure base type
+ if type(entry.base_t) is metadata.Struct:
+ got_struct = True
+ enumerated_items = enumerate(entry.base_t.fields.items())
+
+ # lookup each field, until the current visiting index is met
+ for i, (field_name, field_type) in enumerated_items:
+ if i == entry.index:
+ break
+
+ if field_name == find_name:
+ return field_type
+
+ # variant base type
+ elif type(entry.base_t) is metadata.Variant:
+ enumerated_items = enumerate(entry.base_t.types.items())
+
+ # lookup each type, until the current visiting index is met
+ for i, (type_name, type_type) in enumerated_items:
+ if i == entry.index:
+ break
+
+ if type_name == find_name:
+ if not got_struct:
+ raise ConfigError('invalid path "{}": type cannot be reached because pointed and pointing types are in the same variant type'.format(path))
+
+ return type_type
+
+ # nothing returned here: cannot find type
+ raise ConfigError('invalid path "{}": cannot find type in current context'.format(path))
+
+ def _lookup_path(self, path, from_t):
+ parts = path.lower().split('.')
+ base = None
+ base_is_current = False
+
+ if len(parts) >= 3:
+ if parts[0] == 'trace':
+ if parts[1] == 'packet' and parts[2] == 'header':
+ # make sure packet header exists
+ if self._cur_trace.packet_header_type is None:
+ raise ConfigError('invalid path "{}": no defined trace packet header type'.format(path))
+
+ base = self._cur_trace.packet_header_type
+
+ if self._cur_entity == _Entity.TRACE_PACKET_HEADER:
+ base_is_current = True
+ else:
+ raise ConfigError('invalid path "{}": unknown names after "trace"'.format(path))
+ elif parts[0] == 'stream':
+ if parts[1] == 'packet' and parts[2] == 'context':
+ if self._cur_entity < _Entity.STREAM_PACKET_CONTEXT:
+ raise ConfigError('invalid path "{}": cannot access stream packet context here'.format(path))
+
+ if self._cur_stream.packet_context_type is None:
+ raise ConfigError('invalid path "{}": no defined stream packet context type'.format(path))
+
+ base = self._cur_stream.packet_context_type
+
+ if self._cur_entity == _Entity.STREAM_PACKET_CONTEXT:
+ base_is_current = True
+ elif parts[1] == 'event':
+ if parts[2] == 'header':
+ if self._cur_entity < _Entity.STREAM_EVENT_HEADER:
+ raise ConfigError('invalid path "{}": cannot access stream event header here'.format(path))
+
+ if self._cur_stream.event_header_type is None:
+ raise ConfigError('invalid path "{}": no defined stream event header type'.format(path))
+
+ base = self._cur_stream.event_header_type
+
+ if self._cur_entity == _Entity.STREAM_EVENT_HEADER:
+ base_is_current = True
+ elif parts[2] == 'context':
+ if self._cur_entity < _Entity.STREAM_EVENT_CONTEXT:
+ raise ConfigError('invalid path "{}": cannot access stream event context here'.format(path))
+
+ if self._cur_stream.event_context_type is None:
+ raise ConfigError('invalid path "{}": no defined stream event context type'.format(path))
+
+ base = self._cur_stream.event_context_type
+
+ if self._cur_entity == _Entity.STREAM_EVENT_CONTEXT:
+ base_is_current = True
+ else:
+ raise ConfigError('invalid path "{}": unknown names after "stream.event"'.format(path))
+ else:
+ raise ConfigError('invalid path "{}": unknown names after "stream"'.format(path))
+
+ if base is not None:
+ start_index = 3
+
+ if len(parts) >= 2 and base is None:
+ if parts[0] == 'event':
+ if parts[1] == 'context':
+ if self._cur_entity < _Entity.EVENT_CONTEXT:
+ raise ConfigError('invalid path "{}": cannot access event context here'.format(path))
+
+ if self._cur_event.context_type is None:
+ raise ConfigError('invalid path "{}": no defined event context type'.format(path))
+
+ base = self._cur_event.context_type
+
+ if self._cur_entity == _Entity.EVENT_CONTEXT:
+ base_is_current = True
+ elif parts[1] == 'payload' or parts[1] == 'fields':
+ if self._cur_entity < _Entity.EVENT_PAYLOAD:
+ raise ConfigError('invalid path "{}": cannot access event payload here'.format(path))
+
+ if self._cur_event.payload_type is None:
+ raise ConfigError('invalid path "{}": no defined event payload type'.format(path))
+
+ base = self._cur_event.payload_type
+
+ if self._cur_entity == _Entity.EVENT_PAYLOAD:
+ base_is_current = True
+ else:
+ raise ConfigError('invalid path "{}": unknown names after "event"'.format(path))
+
+ if base is not None:
+ start_index = 2
+
+ if base is not None:
+ return self._lookup_path_from_base(path, parts, base, start_index,
+ base_is_current, from_t)
+ else:
+ return self._lookup_path_from_top(path, parts)
+
+ def _stack_reset(self):
+ self._stack = []
+
+ def _stack_push(self, base_t):
+ entry = _MetadataDynamicTypesValidatorStackEntry(base_t)
+ self._stack.append(entry)
+
+ def _stack_pop(self):
+ self._stack.pop()
+
+ def _stack_incr_index(self):
+ self._stack[-1].index += 1
+
+ def _visit_struct_type(self, t):
+ self._stack_push(t)
+
+ for field_name, field_type in t.fields.items():
+ try:
+ self._visit_type(field_type)
+ except Exception as e:
+ raise ConfigError('in structure type\'s field "{}"'.format(field_name), e)
+
+ self._stack_incr_index()
+
+ self._stack_pop()
+
+ def _visit_array_type(self, t):
+ if not t.is_static:
+ # find length type
+ try:
+ length_type = self._lookup_path(t.length, t)
+ except Exception as e:
+ raise ConfigError('invalid array type\'s length', e)
+
+ # make sure length type is an unsigned integer
+ if type(length_type) is not metadata.Integer:
+ raise ConfigError('array type\'s length does not point to an integer type')
+
+ if length_type.signed:
+ raise ConfigError('array type\'s length does not point to an unsigned integer type')
+
+ self._visit_type(t.element_type)
+
+ def _visit_variant_type(self, t):
+ # find tag type
+ try:
+ tag_type = self._lookup_path(t.tag, t)
+ except Exception as e:
+ raise ConfigError('invalid variant type\'s tag', e)
+
+ # make sure tag type is an enumeration
+ if type(tag_type) is not metadata.Enum:
+ raise ConfigError('variant type\'s tag does not point to an enumeration type')
+
+ # verify that each variant type's type exists as an enumeration member
+ for tag_name in t.types.keys():
+ if tag_name not in tag_type.members:
+ raise ConfigError('cannot find variant type\'s type "{}" in pointed tag type'.format(tag_name))
+
+ self._stack_push(t)
+
+ for type_name, type_type in t.types.items():
+ try:
+ self._visit_type(type_type)
+ except Exception as e:
+ raise ConfigError('in variant type\'s type "{}"'.format(type_name), e)
+
+ self._stack_incr_index()
+
+ self._stack_pop()
+
+ def _visit_type(self, t):
+ if t is None:
+ return
+
+ if type(t) in self._type_to_visit_type_func:
+ func = self._type_to_visit_type_func[type(t)]
+
+ if func is not None:
+ func(t)
+
+ def _visit_event(self, ev):
+ ev_name = ev.name
+
+ # set current event
+ self._cur_event = ev
+
+ # visit event context type
+ self._stack_reset()
+ self._cur_entity = _Entity.EVENT_CONTEXT
+
+ try:
+ self._visit_type(ev.context_type)
+ except Exception as e:
+ raise ConfigError('invalid context type in event "{}"'.format(ev_name), e)
+
+ # visit event payload type
+ self._stack_reset()
+ self._cur_entity = _Entity.EVENT_PAYLOAD
+
+ try:
+ self._visit_type(ev.payload_type)
+ except Exception as e:
+ raise ConfigError('invalid payload type in event "{}"'.format(ev_name), e)
+
+ def _visit_stream(self, stream):
+ stream_name = stream.name
+
+ # set current stream
+ self._cur_stream = stream
+
+ # reset current event
+ self._cur_event = None
+
+ # visit stream packet context type
+ self._stack_reset()
+ self._cur_entity = _Entity.STREAM_PACKET_CONTEXT
+
+ try:
+ self._visit_type(stream.packet_context_type)
+ except Exception as e:
+ raise ConfigError('invalid packet context type in stream "{}"'.format(stream_name), e)
+
+ # visit stream event header type
+ self._stack_reset()
+ self._cur_entity = _Entity.STREAM_EVENT_HEADER
+
+ try:
+ self._visit_type(stream.event_header_type)
+ except Exception as e:
+ raise ConfigError('invalid event header type in stream "{}"'.format(stream_name), e)
+
+ # visit stream event context type
+ self._stack_reset()
+ self._cur_entity = _Entity.STREAM_EVENT_CONTEXT
+
+ try:
+ self._visit_type(stream.event_context_type)
+ except Exception as e:
+ raise ConfigError('invalid event context type in stream "{}"'.format(stream_name), e)
+
+ # visit events
+ for ev in stream.events.values():
+ try:
+ self._visit_event(ev)
+ except Exception as e:
+ raise ConfigError('invalid stream "{}"'.format(stream_name), e)
+
+ def validate(self, meta):
+ # set current trace
+ self._cur_trace = meta.trace
+
+ # visit trace packet header type
+ self._stack_reset()
+ self._cur_entity = _Entity.TRACE_PACKET_HEADER
+
+ try:
+ self._visit_type(meta.trace.packet_header_type)
+ except Exception as e:
+ raise ConfigError('invalid packet header type in trace', e)
+
+ # visit streams
+ for stream in meta.streams.values():
+ self._visit_stream(stream)
+
+
+# Since type inheritance allows types to be only partially defined at
+# any place in the configuration, this validator validates that actual
+# trace, stream, and event types are all complete and valid.
+class _MetadataTypesHistologyValidator:
+ def __init__(self):
+ self._type_to_validate_type_histology_func = {
+ metadata.Integer: self._validate_integer_histology,
+ metadata.FloatingPoint: self._validate_float_histology,
+ metadata.Enum: self._validate_enum_histology,
+ metadata.String: self._validate_string_histology,
+ metadata.Struct: self._validate_struct_histology,
+ metadata.Array: self._validate_array_histology,
+ metadata.Variant: self._validate_variant_histology,
+ }
+
+ def _validate_integer_histology(self, t):
+ # size is set
+ if t.size is None:
+ raise ConfigError('missing integer type\'s size')
+
+ def _validate_float_histology(self, t):
+ # exponent digits is set
+ if t.exp_size is None:
+ raise ConfigError('missing floating point number type\'s exponent size')
+
+ # mantissa digits is set
+ if t.mant_size is None:
+ raise ConfigError('missing floating point number type\'s mantissa size')
+
+ # exponent and mantissa sum is a multiple of 8
+ if (t.exp_size + t.mant_size) % 8 != 0:
+ raise ConfigError('sum of floating point number type\'s exponent and mantissa sizes must be a multiple of 8')
+
+ def _validate_enum_histology(self, t):
+ # integer type is set
+ if t.value_type is None:
+ raise ConfigError('missing enumeration type\'s integer type')
+
+ # there's at least one member
+ if not t.members:
+ raise ConfigError('enumeration type needs at least one member')
+
+ # no overlapping values
+ ranges = []
+
+ for label, value in t.members.items():
+ for rg in ranges:
+ if value[0] <= rg[1] and rg[0] <= value[1]:
+ raise ConfigError('enumeration type\'s member "{}" overlaps another member'.format(label))
+
+ ranges.append(value)
+
+ def _validate_string_histology(self, t):
+ # always valid
+ pass
+
+ def _validate_struct_histology(self, t):
+ # all fields are valid
+ for field_name, field_type in t.fields.items():
+ try:
+ self._validate_type_histology(field_type)
+ except Exception as e:
+ raise ConfigError('invalid structure type\'s field "{}"'.format(field_name), e)
+
+ def _validate_array_histology(self, t):
+ # length is set
+ if t.length is None:
+ raise ConfigError('missing array type\'s length')
+
+ # element type is set
+ if t.element_type is None:
+ raise ConfigError('missing array type\'s element type')
+
+ # element type is valid
+ try:
+ self._validate_type_histology(t.element_type)
+ except Exception as e:
+ raise ConfigError('invalid array type\'s element type', e)
+
+ def _validate_variant_histology(self, t):
+ # tag is set
+ if t.tag is None:
+ raise ConfigError('missing variant type\'s tag')
+
+ # there's at least one type
+ if not t.types:
+ raise ConfigError('variant type needs at least one type')
+
+ # all types are valid
+ for type_name, type_t in t.types.items():
+ try:
+ self._validate_type_histology(type_t)
+ except Exception as e:
+ raise ConfigError('invalid variant type\'s type "{}"'.format(type_name), e)
+
+ def _validate_type_histology(self, t):
+ if t is None:
+ return
+
+ self._type_to_validate_type_histology_func[type(t)](t)
+
+ def _validate_entity_type_histology(self, t):
+ if t is None:
+ return
+
+ # entity cannot be an array
+ if type(t) is metadata.Array:
+ raise ConfigError('cannot use an array here')
+
+ self._validate_type_histology(t)
+
+ def _validate_event_types_histology(self, ev):
+ ev_name = ev.name
+
+ # validate event context type
+ try:
+ self._validate_entity_type_histology(ev.context_type)
+ except Exception as e:
+ raise ConfigError('invalid event context type for event "{}"'.format(ev_name), e)
+
+ # validate event payload type
+ if ev.payload_type is None:
+ raise ConfigError('event payload type must exist in event "{}"'.format(ev_name))
+
+ # TODO: also check arrays, sequences, and variants
+ if type(ev.payload_type) is metadata.Struct:
+ if not ev.payload_type.fields:
+ raise ConfigError('event payload type must have at least one field for event "{}"'.format(ev_name))
+
+ try:
+ self._validate_entity_type_histology(ev.payload_type)
+ except Exception as e:
+ raise ConfigError('invalid event payload type for event "{}"'.format(ev_name), e)
+
+ def _validate_stream_types_histology(self, stream):
+ stream_name = stream.name
+
+ # validate stream packet context type
+ try:
+ self._validate_entity_type_histology(stream.packet_context_type)
+ except Exception as e:
+ raise ConfigError('invalid stream packet context type for stream "{}"'.format(stream_name), e)
+
+ # validate stream event header type
+ try:
+ self._validate_entity_type_histology(stream.event_header_type)
+ except Exception as e:
+ raise ConfigError('invalid stream event header type for stream "{}"'.format(stream_name), e)
+
+ # validate stream event context type
+ try:
+ self._validate_entity_type_histology(stream.event_context_type)
+ except Exception as e:
+ raise ConfigError('invalid stream event context type for stream "{}"'.format(stream_name), e)
+
+ # validate events
+ for ev in stream.events.values():
+ try:
+ self._validate_event_types_histology(ev)
+ except Exception as e:
+ raise ConfigError('invalid event in stream "{}"'.format(stream_name), e)
+
+ def validate(self, meta):
+ # validate trace packet header type
+ try:
+ self._validate_entity_type_histology(meta.trace.packet_header_type)
+ except Exception as e:
+ raise ConfigError('invalid trace packet header type', e)
+
+ # validate streams
+ for stream in meta.streams.values():
+ self._validate_stream_types_histology(stream)
+
+
+class _YamlConfigParser:
+ def __init__(self):
+ self._class_name_to_create_type_func = {
+ 'int': self._create_integer,
+ 'integer': self._create_integer,
+ 'flt': self._create_float,
+ 'float': self._create_float,
+ 'floating-point': self._create_float,
+ 'enum': self._create_enum,
+ 'enumeration': self._create_enum,
+ 'str': self._create_string,
+ 'string': self._create_string,
+ 'struct': self._create_struct,
+ 'structure': self._create_struct,
+ 'array': self._create_array,
+ 'var': self._create_variant,
+ 'variant': self._create_variant,
+ }
+ self._type_to_create_type_func = {
+ metadata.Integer: self._create_integer,
+ metadata.FloatingPoint: self._create_float,
+ metadata.Enum: self._create_enum,
+ metadata.String: self._create_string,
+ metadata.Struct: self._create_struct,
+ metadata.Array: self._create_array,
+ metadata.Variant: self._create_variant,
+ }
+
+ def _set_byte_order(self, metadata_node):
+ if 'trace' not in metadata_node:
+ raise ConfigError('missing "trace" property (metadata)')
+
+ trace_node = metadata_node['trace']
+
+ if not _is_assoc_array_prop(trace_node):
+ raise ConfigError('"trace" property (metadata) must be an associative array')
+
+ if 'byte-order' not in trace_node:
+ raise ConfigError('missing "byte-order" property (trace)')
+
+ self._bo = _byte_order_str_to_bo(trace_node['byte-order'])
+
+ if self._bo is None:
+ raise ConfigError('invalid "byte-order" property (trace): must be "le" or "be"')
+
+ def _lookup_type_alias(self, name):
+ if name in self._tas:
+ return copy.deepcopy(self._tas[name])
+
+ def _set_int_clock_prop_mapping(self, int_obj, prop_mapping_node):
+ unk_prop = _get_first_unknown_prop(prop_mapping_node, ['type', 'name', 'property'])
+
+ if unk_prop:
+ raise ConfigError('unknown property in integer type object\'s clock property mapping: "{}"'.format(unk_prop))
+
+ if 'name' not in prop_mapping_node:
+ raise ConfigError('missing "name" property in integer type object\'s clock property mapping')
+
+ if 'property' not in prop_mapping_node:
+ raise ConfigError('missing "property" property in integer type object\'s clock property mapping')
+
+ clock_name = prop_mapping_node['name']
+ prop = prop_mapping_node['property']
+
+ if not _is_str_prop(clock_name):
+ raise ConfigError('"name" property of integer type object\'s clock property mapping must be a string')
+
+ if not _is_str_prop(prop):
+ raise ConfigError('"property" property of integer type object\'s clock property mapping must be a string')
+
+ if clock_name not in self._clocks:
+ raise ConfigError('invalid clock name "{}" in integer type object\'s clock property mapping'.format(clock_name))
+
+ if prop != 'value':
+ raise ConfigError('invalid "property" property in integer type object\'s clock property mapping: "{}"'.format(prop))
+
+ mapped_clock = self._clocks[clock_name]
+ int_obj.property_mappings.append(metadata.PropertyMapping(mapped_clock, prop))
+
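+ # A minimal integer type object node, as handled by _create_integer()
+ # below, could look like this in the YAML configuration (property
+ # values are illustrative):
+ #
+ #     class: int
+ #     size: 32
+ #     align: 8
+ #     signed: false
+ #     byte-order: le
+ #     base: hex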
+ def _create_integer(self, obj, node):
+ if obj is None:
+ # create integer object
+ obj = metadata.Integer()
+
+ unk_prop = _get_first_unknown_type_prop(node, [
+ 'size',
+ 'align',
+ 'signed',
+ 'byte-order',
+ 'base',
+ 'encoding',
+ 'property-mappings',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown integer type object property: "{}"'.format(unk_prop))
+
+ # size
+ if 'size' in node:
+ size = node['size']
+
+ if not _is_int_prop(size):
+ raise ConfigError('"size" property of integer type object must be an integer')
+
+ if size < 1:
+ raise ConfigError('invalid integer size: {}'.format(size))
+
+ obj.size = size
+
+ # align
+ if 'align' in node:
+ align = node['align']
+
+ if not _is_int_prop(align):
+ raise ConfigError('"align" property of integer type object must be an integer')
+
+ if not _is_valid_alignment(align):
+ raise ConfigError('invalid alignment: {}'.format(align))
+
+ obj.align = align
+
+ # signed
+ if 'signed' in node:
+ signed = node['signed']
+
+ if not _is_bool_prop(signed):
+ raise ConfigError('"signed" property of integer type object must be a boolean')
+
+ obj.signed = signed
+
+ # byte order
+ if 'byte-order' in node:
+ byte_order = node['byte-order']
+
+ if not _is_str_prop(byte_order):
+ raise ConfigError('"byte-order" property of integer type object must be a string ("le" or "be")')
+
+ byte_order = _byte_order_str_to_bo(byte_order)
+
+ if byte_order is None:
+ raise ConfigError('invalid "byte-order" property in integer type object')
+ else:
+ byte_order = self._bo
+
+ obj.byte_order = byte_order
+
+ # base
+ if 'base' in node:
+ base = node['base']
+
+ if not _is_str_prop(base):
+ raise ConfigError('"base" property of integer type object must be a string ("bin", "oct", "dec", or "hex")')
+
+ if base == 'bin':
+ base = 2
+ elif base == 'oct':
+ base = 8
+ elif base == 'dec':
+ base = 10
+ elif base == 'hex':
+ base = 16
+ else:
+ raise ConfigError('unknown "base" property value in integer type object: "{}"'.format(base))
+
+ obj.base = base
+
+ # encoding
+ if 'encoding' in node:
+ encoding = node['encoding']
+
+ if not _is_str_prop(encoding):
+ raise ConfigError('"encoding" property of integer type object must be a string ("none", "ascii", or "utf-8")')
+
+ encoding = _encoding_str_to_encoding(encoding)
+
+ if encoding is None:
+ raise ConfigError('invalid "encoding" property in integer type object')
+
+ obj.encoding = encoding
+
+ # property mappings
+ if 'property-mappings' in node:
+ prop_mappings = node['property-mappings']
+
+ if not _is_array_prop(prop_mappings):
+ raise ConfigError('"property-mappings" property of integer type object must be an array')
+
+ if len(prop_mappings) > 1:
+ raise ConfigError('length of "property-mappings" array in integer type object must be 1')
+
+ del obj.property_mappings[:]
+
+ for index, prop_mapping in enumerate(prop_mappings):
+ if not _is_assoc_array_prop(prop_mapping):
+ raise ConfigError('elements of "property-mappings" property of integer type object must be associative arrays')
+
+ if 'type' not in prop_mapping:
+ raise ConfigError('missing "type" property in integer type object\'s "property-mappings" array\'s element #{}'.format(index))
+
+ prop_type = prop_mapping['type']
+
+ if not _is_str_prop(prop_type):
+ raise ConfigError('"type" property of integer type object\'s "property-mappings" array\'s element #{} must be a string'.format(index))
+
+ if prop_type == 'clock':
+ self._set_int_clock_prop_mapping(obj, prop_mapping)
+ else:
+ raise ConfigError('unknown property mapping type "{}" in integer type object\'s "property-mappings" array\'s element #{}'.format(prop_type, index))
+
+ return obj
+
+ def _create_float(self, obj, node):
+ if obj is None:
+ # create floating point number object
+ obj = metadata.FloatingPoint()
+
+ unk_prop = _get_first_unknown_type_prop(node, [
+ 'size',
+ 'align',
+ 'byte-order',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown floating point number type object property: "{}"'.format(unk_prop))
+
+ # size
+ if 'size' in node:
+ size = node['size']
+
+ if not _is_assoc_array_prop(size):
+ raise ConfigError('"size" property of floating point number type object must be an associative array')
+
+ unk_prop = _get_first_unknown_prop(size, ['exp', 'mant'])
+
+ if unk_prop:
+ raise ConfigError('unknown property in floating point number type object\'s "size" property: "{}"'.format(unk_prop))
+
+ if 'exp' in size:
+ exp = size['exp']
+
+ if not _is_int_prop(exp):
+ raise ConfigError('"exp" property of floating point number type object\'s "size" property must be an integer')
+
+ if exp < 1:
+ raise ConfigError('invalid floating point number exponent size: {}'.format(exp))
+
+ obj.exp_size = exp
+
+ if 'mant' in size:
+ mant = size['mant']
+
+ if not _is_int_prop(mant):
+ raise ConfigError('"mant" property of floating point number type object\'s "size" property must be an integer')
+
+ if mant < 1:
+ raise ConfigError('invalid floating point number mantissa size: {}'.format(mant))
+
+ obj.mant_size = mant
+
+ # align
+ if 'align' in node:
+ align = node['align']
+
+ if not _is_int_prop(align):
+ raise ConfigError('"align" property of floating point number type object must be an integer')
+
+ if not _is_valid_alignment(align):
+ raise ConfigError('invalid alignment: {}'.format(align))
+
+ obj.align = align
+
+ # byte order
+ if 'byte-order' in node:
+ byte_order = node['byte-order']
+
+ if not _is_str_prop(byte_order):
+ raise ConfigError('"byte-order" property of floating point number type object must be a string ("le" or "be")')
+
+ byte_order = _byte_order_str_to_bo(byte_order)
+
+ if byte_order is None:
+ raise ConfigError('invalid "byte-order" property in floating point number type object')
+ else:
+ byte_order = self._bo
+
+ obj.byte_order = byte_order
+
+ return obj
+
+ def _create_enum(self, obj, node):
+ if obj is None:
+ # create enumeration object
+ obj = metadata.Enum()
+
+ unk_prop = _get_first_unknown_type_prop(node, [
+ 'value-type',
+ 'members',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown enumeration type object property: "{}"'.format(unk_prop))
+
+ # value type
+ if 'value-type' in node:
+ try:
+ obj.value_type = self._create_type(node['value-type'])
+ except Exception as e:
+ raise ConfigError('cannot create enumeration type\'s integer type', e)
+
+ # members
+ if 'members' in node:
+ members_node = node['members']
+
+ if not _is_array_prop(members_node):
+ raise ConfigError('"members" property of enumeration type object must be an array')
+
+ cur = 0
+
+ for index, m_node in enumerate(members_node):
+ if not _is_str_prop(m_node) and not _is_assoc_array_prop(m_node):
+ raise ConfigError('invalid enumeration member #{}: expecting a string or an associative array'.format(index))
+
+ if _is_str_prop(m_node):
+ label = m_node
+ value = (cur, cur)
+ cur += 1
+ else:
+ if 'label' not in m_node:
+ raise ConfigError('missing "label" property in enumeration member #{}'.format(index))
+
+ label = m_node['label']
+
+ if not _is_str_prop(label):
+ raise ConfigError('"label" property of enumeration member #{} must be a string'.format(index))
+
+ if 'value' not in m_node:
+ raise ConfigError('missing "value" property in enumeration member ("{}")'.format(label))
+
+ value = m_node['value']
+
+ if not _is_int_prop(value) and not _is_array_prop(value):
+ raise ConfigError('invalid enumeration member ("{}"): expecting an integer or an array'.format(label))
+
+ if _is_int_prop(value):
+ cur = value + 1
+ value = (value, value)
+ else:
+ if len(value) != 2:
+ raise ConfigError('invalid enumeration member ("{}"): range must have exactly two items'.format(label))
+
+ mn = value[0]
+ mx = value[1]
+
+ if mn > mx:
+ raise ConfigError('invalid enumeration member ("{}"): invalid range ({} > {})'.format(label, mn, mx))
+
+ value = (mn, mx)
+ cur = mx + 1
+
+ obj.members[label] = value
+
+ return obj
+
+ def _create_string(self, obj, node):
+ if obj is None:
+ # create string object
+ obj = metadata.String()
+
+ unk_prop = _get_first_unknown_type_prop(node, [
+ 'encoding',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown string type object property: "{}"'.format(unk_prop))
+
+ # encoding
+ if 'encoding' in node:
+ encoding = node['encoding']
+
+ if not _is_str_prop(encoding):
+ raise ConfigError('"encoding" property of string type object must be a string ("none", "ascii", or "utf-8")')
+
+ encoding = _encoding_str_to_encoding(encoding)
+
+ if encoding is None:
+ raise ConfigError('invalid "encoding" property in string type object')
+
+ obj.encoding = encoding
+
+ return obj
+
+ def _create_struct(self, obj, node):
+ if obj is None:
+ # create structure object
+ obj = metadata.Struct()
+
+ unk_prop = _get_first_unknown_type_prop(node, [
+ 'min-align',
+ 'fields',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown structure type object property: "{}"'.format(unk_prop))
+
+ # minimum alignment
+ if 'min-align' in node:
+ min_align = node['min-align']
+
+ if not _is_int_prop(min_align):
+ raise ConfigError('"min-align" property of structure type object must be an integer')
+
+ if not _is_valid_alignment(min_align):
+ raise ConfigError('invalid minimum alignment: {}'.format(min_align))
+
+ obj.min_align = min_align
+
+ # fields
+ if 'fields' in node:
+ fields = node['fields']
+
+ if not _is_assoc_array_prop(fields):
+ raise ConfigError('"fields" property of structure type object must be an associative array')
+
+ for field_name, field_node in fields.items():
+ if not is_valid_identifier(field_name):
+ raise ConfigError('"{}" is not a valid field name for structure type'.format(field_name))
+
+ try:
+ obj.fields[field_name] = self._create_type(field_node)
+ except Exception as e:
+ raise ConfigError('cannot create structure type\'s field "{}"'.format(field_name), e)
+
+ return obj
+
+ def _create_array(self, obj, node):
+ if obj is None:
+ # create array object
+ obj = metadata.Array()
+
+ unk_prop = _get_first_unknown_type_prop(node, [
+ 'length',
+ 'element-type',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown array type object property: "{}"'.format(unk_prop))
+
+ # length
+ if 'length' in node:
+ length = node['length']
+
+ if not _is_int_prop(length) and not _is_str_prop(length):
+ raise ConfigError('"length" property of array type object must be an integer or a string')
+
+ if type(length) is int and length < 0:
+ raise ConfigError('invalid static array length: {}'.format(length))
+
+ obj.length = length
+
+ # element type
+ if 'element-type' in node:
+ try:
+ obj.element_type = self._create_type(node['element-type'])
+ except Exception as e:
+ raise ConfigError('cannot create array type\'s element type', e)
+
+ return obj
+
+ def _create_variant(self, obj, node):
+ if obj is None:
+ # create variant object
+ obj = metadata.Variant()
+
+ unk_prop = _get_first_unknown_type_prop(node, [
+ 'tag',
+ 'types',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown variant type object property: "{}"'.format(unk_prop))
+
+ # tag
+ if 'tag' in node:
+ tag = node['tag']
+
+ if not _is_str_prop(tag):
+ raise ConfigError('"tag" property of variant type object must be a string')
+
+ # do not validate variant tag for the moment; will be done in a
+ # second phase
+ obj.tag = tag
+
+ # types
+ if 'types' in node:
+ types = node['types']
+
+ if not _is_assoc_array_prop(types):
+ raise ConfigError('"types" property of variant type object must be an associative array')
+
+ # do not validate type names for the moment; will be done in a
+ # second phase
+ for type_name, type_node in types.items():
+ if not is_valid_identifier(type_name):
+ raise ConfigError('"{}" is not a valid type name for variant type'.format(type_name))
+
+ try:
+ obj.types[type_name] = self._create_type(type_node)
+ except Exception as e:
+ raise ConfigError('cannot create variant type\'s type "{}"'.format(type_name), e)
+
+ return obj
+
+ def _create_type(self, type_node):
+ if type(type_node) is str:
+ t = self._lookup_type_alias(type_node)
+
+ if t is None:
+ raise ConfigError('unknown type alias "{}"'.format(type_node))
+
+ return t
+
+ if not _is_assoc_array_prop(type_node):
+ raise ConfigError('type objects must be associative arrays')
+
+ if 'inherit' in type_node and 'class' in type_node:
+ raise ConfigError('cannot specify both "inherit" and "class" properties in type object')
+
+ if 'inherit' in type_node:
+ inherit = type_node['inherit']
+
+ if not _is_str_prop(inherit):
+ raise ConfigError('"inherit" property of type object must be a string')
+
+ base = self._lookup_type_alias(inherit)
+
+ if base is None:
+ raise ConfigError('cannot inherit from type alias "{}": type alias does not exist'.format(inherit))
+
+ func = self._type_to_create_type_func[type(base)]
+ else:
+ if 'class' not in type_node:
+ raise ConfigError('type objects which do not inherit must have a "class" property')
+
+ class_name = type_node['class']
+
+ if type(class_name) is not str:
+ raise ConfigError('type objects\' "class" property must be a string')
+
+ if class_name not in self._class_name_to_create_type_func:
+ raise ConfigError('unknown type class "{}"'.format(class_name))
+
+ base = None
+ func = self._class_name_to_create_type_func[class_name]
+
+ return func(base, type_node)
+
+ def _register_type_aliases(self, metadata_node):
+ self._tas = dict()
+
+ if 'type-aliases' not in metadata_node:
+ return
+
+ ta_node = metadata_node['type-aliases']
+
+ if not _is_assoc_array_prop(ta_node):
+ raise ConfigError('"type-aliases" property (metadata) must be an associative array')
+
+ for ta_name, ta_type in ta_node.items():
+ if ta_name in self._tas:
+ raise ConfigError('duplicate type alias "{}"'.format(ta_name))
+
+ try:
+ t = self._create_type(ta_type)
+ except Exception as e:
+ raise ConfigError('cannot create type alias "{}"'.format(ta_name), e)
+
+ self._tas[ta_name] = t
+
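+ # A clock object node, as handled by _create_clock() below, could
+ # look like this in the YAML configuration (property values are
+ # illustrative):
+ #
+ #     freq: 1000000000
+ #     offset:
+ #       seconds: 1434072888
+ #       cycles: 0
+ #     return-ctype: uint64_t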
+ def _create_clock(self, node):
+ # create clock object
+ clock = metadata.Clock()
+
+ unk_prop = _get_first_unknown_prop(node, [
+ 'uuid',
+ 'description',
+ 'freq',
+ 'error-cycles',
+ 'offset',
+ 'absolute',
+ 'return-ctype',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown clock object property: "{}"'.format(unk_prop))
+
+ # UUID
+ if 'uuid' in node:
+ uuidp = node['uuid']
+
+ if not _is_str_prop(uuidp):
+ raise ConfigError('"uuid" property of clock object must be a string')
+
+ try:
+ uuidp = uuid.UUID(uuidp)
+ except:
+ raise ConfigError('malformed UUID (clock object): "{}"'.format(uuidp))
+
+ clock.uuid = uuidp
+
+ # description
+ if 'description' in node:
+ desc = node['description']
+
+ if not _is_str_prop(desc):
+ raise ConfigError('"description" property of clock object must be a string')
+
+ clock.description = desc
+
+ # frequency
+ if 'freq' in node:
+ freq = node['freq']
+
+ if not _is_int_prop(freq):
+ raise ConfigError('"freq" property of clock object must be an integer')
+
+ if freq < 1:
+ raise ConfigError('invalid clock frequency: {}'.format(freq))
+
+ clock.freq = freq
+
+ # error cycles
+ if 'error-cycles' in node:
+ error_cycles = node['error-cycles']
+
+ if not _is_int_prop(error_cycles):
+ raise ConfigError('"error-cycles" property of clock object must be an integer')
+
+ if error_cycles < 0:
+ raise ConfigError('invalid clock error cycles: {}'.format(error_cycles))
+
+ clock.error_cycles = error_cycles
+
+ # offset
+ if 'offset' in node:
+ offset = node['offset']
+
+ if not _is_assoc_array_prop(offset):
+ raise ConfigError('"offset" property of clock object must be an associative array')
+
+ unk_prop = _get_first_unknown_prop(offset, ['cycles', 'seconds'])
+
+ if unk_prop:
+ raise ConfigError('unknown clock object\'s offset property: "{}"'.format(unk_prop))
+
+ # cycles
+ if 'cycles' in offset:
+ offset_cycles = offset['cycles']
+
+ if not _is_int_prop(offset_cycles):
+ raise ConfigError('"cycles" property of clock object\'s offset property must be an integer')
+
+ if offset_cycles < 0:
+ raise ConfigError('invalid clock offset cycles: {}'.format(offset_cycles))
+
+ clock.offset_cycles = offset_cycles
+
+ # seconds
+ if 'seconds' in offset:
+ offset_seconds = offset['seconds']
+
+ if not _is_int_prop(offset_seconds):
+ raise ConfigError('"seconds" property of clock object\'s offset property must be an integer')
+
+ if offset_seconds < 0:
+ raise ConfigError('invalid clock offset seconds: {}'.format(offset_seconds))
+
+ clock.offset_seconds = offset_seconds
+
+ # absolute
+ if 'absolute' in node:
+ absolute = node['absolute']
+
+ if not _is_bool_prop(absolute):
+ raise ConfigError('"absolute" property of clock object must be a boolean')
+
+ clock.absolute = absolute
+
+ # return C type
+ if 'return-ctype' in node:
+ ctype = node['return-ctype']
+
+ if not _is_str_prop(ctype):
+ raise ConfigError('"return-ctype" property of clock object must be a string')
+
+ clock.return_ctype = ctype
+
+ return clock
+
+ def _register_clocks(self, metadata_node):
+ self._clocks = collections.OrderedDict()
+
+ if 'clocks' not in metadata_node:
+ return
+
+ clocks_node = metadata_node['clocks']
+
+ if not _is_assoc_array_prop(clocks_node):
+ raise ConfigError('"clocks" property (metadata) must be an associative array')
+
+ for clock_name, clock_node in clocks_node.items():
+ if not is_valid_identifier(clock_name):
+ raise ConfigError('invalid clock name: "{}"'.format(clock_name))
+
+ if clock_name in self._clocks:
+ raise ConfigError('duplicate clock "{}"'.format(clock_name))
+
+ try:
+ clock = self._create_clock(clock_node)
+ except Exception as e:
+ raise ConfigError('cannot create clock "{}"'.format(clock_name), e)
+
+ clock.name = clock_name
+ self._clocks[clock_name] = clock
+
+ def _create_env(self, metadata_node):
+ env = collections.OrderedDict()
+
+ if 'env' not in metadata_node:
+ return env
+
+ env_node = metadata_node['env']
+
+ if not _is_assoc_array_prop(env_node):
+ raise ConfigError('"env" property (metadata) must be an associative array')
+
+ for env_name, env_value in env_node.items():
+ if env_name in env:
+ raise ConfigError('duplicate environment variable "{}"'.format(env_name))
+
+ if not is_valid_identifier(env_name):
+ raise ConfigError('invalid environment variable name: "{}"'.format(env_name))
+
+ if not _is_int_prop(env_value) and not _is_str_prop(env_value):
+ raise ConfigError('invalid environment variable value ("{}"): expecting integer or string'.format(env_name))
+
+ env[env_name] = env_value
+
+ return env
+
+ def _register_log_levels(self, metadata_node):
+ self._log_levels = dict()
+
+ if 'log-levels' not in metadata_node:
+ return
+
+ log_levels_node = metadata_node['log-levels']
+
+ if not _is_assoc_array_prop(log_levels_node):
+ raise ConfigError('"log-levels" property (metadata) must be an associative array')
+
+ for ll_name, ll_value in log_levels_node.items():
+ if ll_name in self._log_levels:
+ raise ConfigError('duplicate log level entry "{}"'.format(ll_name))
+
+ if not _is_int_prop(ll_value):
+ raise ConfigError('invalid log level entry ("{}"): expecting an integer'.format(ll_name))
+
+ self._log_levels[ll_name] = ll_value
+
+ def _create_trace(self, metadata_node):
+ # create trace object
+ trace = metadata.Trace()
+
+ if 'trace' not in metadata_node:
+ raise ConfigError('missing "trace" property (metadata)')
+
+ trace_node = metadata_node['trace']
+
+ if not _is_assoc_array_prop(trace_node):
+ raise ConfigError('"trace" property (metadata) must be an associative array')
+
+ unk_prop = _get_first_unknown_prop(trace_node, [
+ 'byte-order',
+ 'uuid',
+ 'packet-header-type',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown trace object property: "{}"'.format(unk_prop))
+
+ # set byte order (already parsed)
+ trace.byte_order = self._bo
+
+ # UUID
+ if 'uuid' in trace_node:
+ uuidp = trace_node['uuid']
+
+ if not _is_str_prop(uuidp):
+ raise ConfigError('"uuid" property of trace object must be a string')
+
+ if uuidp == 'auto':
+ uuidp = uuid.uuid1()
+ else:
+ try:
+ uuidp = uuid.UUID(uuidp)
+ except ValueError:
+ raise ConfigError('malformed UUID (trace object): "{}"'.format(uuidp))
+
+ trace.uuid = uuidp
+
+ # packet header type
+ if 'packet-header-type' in trace_node:
+ try:
+ ph_type = self._create_type(trace_node['packet-header-type'])
+ except Exception as e:
+ raise ConfigError('cannot create packet header type (trace)', e)
+
+ trace.packet_header_type = ph_type
+
+ return trace
+
+ def _lookup_log_level(self, ll):
+ if _is_int_prop(ll):
+ return ll
+ elif _is_str_prop(ll) and ll in self._log_levels:
+ return self._log_levels[ll]
+
+ def _create_event(self, event_node):
+ event = metadata.Event()
+ if not _is_assoc_array_prop(event_node):
+ raise ConfigError('event objects must be associative arrays')
+
+ unk_prop = _get_first_unknown_prop(event_node, [
+ 'log-level',
+ 'context-type',
+ 'payload-type',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown event object property: "{}"'.format(unk_prop))
+
+ if 'log-level' in event_node:
+ ll = self._lookup_log_level(event_node['log-level'])
+
+ if ll is None:
+ raise ConfigError('invalid "log-level" property')
+
+ event.log_level = ll
+
+ if 'context-type' in event_node:
+ try:
+ t = self._create_type(event_node['context-type'])
+ except Exception as e:
+ raise ConfigError('cannot create event\'s context type object', e)
+
+ event.context_type = t
+
+ if 'payload-type' not in event_node:
+ raise ConfigError('missing "payload-type" property in event object')
+
+ try:
+ t = self._create_type(event_node['payload-type'])
+ except Exception as e:
+ raise ConfigError('cannot create event\'s payload type object', e)
+
+ event.payload_type = t
+
+ return event
+
+ def _create_stream(self, stream_node):
+ stream = metadata.Stream()
+ if not _is_assoc_array_prop(stream_node):
+ raise ConfigError('stream objects must be associative arrays')
+
+ unk_prop = _get_first_unknown_prop(stream_node, [
+ 'packet-context-type',
+ 'event-header-type',
+ 'event-context-type',
+ 'events',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown stream object property: "{}"'.format(unk_prop))
+
+ if 'packet-context-type' in stream_node:
+ try:
+ t = self._create_type(stream_node['packet-context-type'])
+ except Exception as e:
+ raise ConfigError('cannot create stream\'s packet context type object', e)
+
+ stream.packet_context_type = t
+
+ if 'event-header-type' in stream_node:
+ try:
+ t = self._create_type(stream_node['event-header-type'])
+ except Exception as e:
+ raise ConfigError('cannot create stream\'s event header type object', e)
+
+ stream.event_header_type = t
+
+ if 'event-context-type' in stream_node:
+ try:
+ t = self._create_type(stream_node['event-context-type'])
+ except Exception as e:
+ raise ConfigError('cannot create stream\'s event context type object', e)
+
+ stream.event_context_type = t
+
+ if 'events' not in stream_node:
+ raise ConfigError('missing "events" property in stream object')
+
+ events = stream_node['events']
+
+ if not _is_assoc_array_prop(events):
+ raise ConfigError('"events" property of stream object must be an associative array')
+
+ if not events:
+ raise ConfigError('at least one event is needed within a stream object')
+
+ cur_id = 0
+
+ for ev_name, ev_node in events.items():
+ try:
+ ev = self._create_event(ev_node)
+ except Exception as e:
+ raise ConfigError('cannot create event "{}"'.format(ev_name), e)
+
+ ev.id = cur_id
+ ev.name = ev_name
+ stream.events[ev_name] = ev
+ cur_id += 1
+
+ return stream
+
+ def _create_streams(self, metadata_node):
+ streams = collections.OrderedDict()
+
+ if 'streams' not in metadata_node:
+ raise ConfigError('missing "streams" property (metadata)')
+
+ streams_node = metadata_node['streams']
+
+ if not _is_assoc_array_prop(streams_node):
+ raise ConfigError('"streams" property (metadata) must be an associative array')
+
+ if not streams_node:
+ raise ConfigError('at least one stream is needed (metadata)')
+
+ cur_id = 0
+
+ for stream_name, stream_node in streams_node.items():
+ try:
+ stream = self._create_stream(stream_node)
+ except Exception as e:
+ raise ConfigError('cannot create stream "{}"'.format(stream_name), e)
+
+ stream.id = cur_id
+ stream.name = str(stream_name)
+ streams[stream_name] = stream
+ cur_id += 1
+
+ return streams
+
+ def _create_metadata(self, root):
+ meta = metadata.Metadata()
+
+ if 'metadata' not in root:
+ raise ConfigError('missing "metadata" property (root)')
+
+ metadata_node = root['metadata']
+ if not _is_assoc_array_prop(metadata_node):
+ raise ConfigError('"metadata" property (root) must be an associative array')
+
+ unk_prop = _get_first_unknown_prop(metadata_node, [
+ 'type-aliases',
+ 'log-levels',
+ 'trace',
+ 'env',
+ 'clocks',
+ 'streams',
+ ])
+
+ if unk_prop:
+ raise ConfigError('unknown metadata property: "{}"'.format(unk_prop))
+
+ self._set_byte_order(metadata_node)
+ self._register_clocks(metadata_node)
+ meta.clocks = self._clocks
+ self._register_type_aliases(metadata_node)
+ meta.env = self._create_env(metadata_node)
+ meta.trace = self._create_trace(metadata_node)
+ self._register_log_levels(metadata_node)
+ meta.streams = self._create_streams(metadata_node)
+
+ return meta
+
+ def _get_version(self, root):
+ if 'version' not in root:
+ raise ConfigError('missing "version" property (root)')
+
+ version_node = root['version']
+
+ if not _is_str_prop(version_node):
+ raise ConfigError('"version" property (root) must be a string')
+
+ if version_node != '2.0':
+ raise ConfigError('unsupported version: {}'.format(version_node))
+
+ return version_node
+
+ def _get_prefix(self, root):
+ if 'prefix' not in root:
+ return 'barectf_'
+
+ prefix_node = root['prefix']
+
+ if not _is_str_prop(prefix_node):
+ raise ConfigError('"prefix" property (root) must be a string')
+
+ if not is_valid_identifier(prefix_node):
+ raise ConfigError('"prefix" property (root) must be a valid C identifier')
+
+ return prefix_node
+
+ def _yaml_ordered_load(self, stream):
+ class OLoader(yaml.Loader):
+ pass
+
+ def construct_mapping(loader, node):
+ loader.flatten_mapping(node)
+
+ return collections.OrderedDict(loader.construct_pairs(node))
+
+ OLoader.add_constructor(yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
+ construct_mapping)
+
+ return yaml.load(stream, OLoader)
+
+ def parse(self, yml):
+ try:
+ root = self._yaml_ordered_load(yml)
+ except Exception as e:
+ raise ConfigError('cannot parse YAML input', e)
+
+ if not _is_assoc_array_prop(root):
+ raise ConfigError('root must be an associative array')
+
+ self._version = self._get_version(root)
+ meta = self._create_metadata(root)
+ prefix = self._get_prefix(root)
+
+ return Config(self._version, prefix, meta)
+
+
+def from_yaml(yml):
+ parser = _YamlConfigParser()
+ cfg = parser.parse(yml)
+
+ return cfg
+
+
+def from_yaml_file(path):
+ try:
+ with open(path) as f:
+ return from_yaml(f.read())
+ except Exception as e:
+ raise ConfigError('cannot create configuration from YAML file "{}"'.format(path), e)
--- /dev/null
+# The MIT License (MIT)
+#
+# Copyright (c) 2014-2015 Philippe Proulx <pproulx@efficios.com>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+
+from barectf import templates
+from barectf import metadata
+import barectf.codegen
+import collections
+import argparse
+import datetime
+import barectf
+import sys
+import os
+import re
+
+
+class _StaticAlignSizeAutomaton:
+ def __init__(self):
+ self._byte_offset = 0
+ self._type_to_update_byte_offset_func = {
+ metadata.Integer: self.write_static_size,
+ metadata.FloatingPoint: self.write_static_size,
+ metadata.Enum: self.write_static_size,
+ metadata.String: self.reset,
+ }
+
+ @property
+ def byte_offset(self):
+ return self._byte_offset
+
+ @byte_offset.setter
+ def byte_offset(self, value):
+ self._byte_offset = value
+
+ def _wrap_byte_offset(self):
+ self._byte_offset %= 8
+
+ def align(self, align):
+ # align byte offset
+ self._byte_offset = (self._byte_offset + (align - 1)) & -align
+
+ # wrap on current byte
+ self._wrap_byte_offset()
+
+ def write_type(self, t):
+ self._type_to_update_byte_offset_func[type(t)](t)
+
+ def write_static_size(self, t):
+ # increment byte offset
+ self._byte_offset += t.size
+
+ # wrap on current byte
+ self._wrap_byte_offset()
+
+ def reset(self, t=None):
+ # reset byte offset (strings are always byte-aligned)
+ self._byte_offset = 0
+
+ def set_unknown(self):
+ self._byte_offset = None
+
+
+_PREFIX_TPH = 'tph_'
+_PREFIX_SPC = 'spc_'
+_PREFIX_SEH = 'seh_'
+_PREFIX_SEC = 'sec_'
+_PREFIX_EC = 'ec_'
+_PREFIX_EP = 'ep_'
+
+
+class CCodeGenerator:
+ def __init__(self, cfg):
+ self._cfg = cfg
+ self._cg = barectf.codegen.CodeGenerator('\t')
+ self._type_to_get_ctype_func = {
+ metadata.Integer: self._get_int_ctype,
+ metadata.FloatingPoint: self._get_float_ctype,
+ metadata.Enum: self._get_enum_ctype,
+ metadata.String: self._get_string_ctype,
+ }
+ self._type_to_generate_serialize_func = {
+ metadata.Integer: self._generate_serialize_int,
+ metadata.FloatingPoint: self._generate_serialize_float,
+ metadata.Enum: self._generate_serialize_enum,
+ metadata.String: self._generate_serialize_string,
+ }
+ self._saved_byte_offsets = {}
+ self._sasa = _StaticAlignSizeAutomaton()
+
+ def _generate_ctx_parent(self):
+ tmpl = templates._CTX_PARENT
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix))
+
+ def _generate_ctx(self, stream):
+ tmpl = templates._CTX_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name))
+ tmpl = 'uint32_t off_tph_{fname};'
+ self._cg.indent()
+ trace_packet_header_type = self._cfg.metadata.trace.packet_header_type
+
+ if trace_packet_header_type is not None:
+ for field_name in trace_packet_header_type.fields:
+ self._cg.add_lines(tmpl.format(fname=field_name))
+
+ tmpl = 'uint32_t off_spc_{fname};'
+
+ if stream.packet_context_type is not None:
+ for field_name in stream.packet_context_type.fields:
+ self._cg.add_lines(tmpl.format(fname=field_name))
+
+ self._cg.unindent()
+ tmpl = templates._CTX_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_ctxs(self):
+ for stream in self._cfg.metadata.streams.values():
+ self._generate_ctx(stream)
+
+ def _generate_clock_cb(self, clock):
+ tmpl = templates._CLOCK_CB
+ self._cg.add_lines(tmpl.format(return_ctype=clock.return_ctype,
+ cname=clock.name))
+
+ def _generate_clock_cbs(self):
+ for clock in self._cfg.metadata.clocks.values():
+ self._generate_clock_cb(clock)
+
+ def _generate_platform_callbacks(self):
+ tmpl = templates._PLATFORM_CALLBACKS_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix))
+ self._cg.indent()
+ self._generate_clock_cbs()
+ self._cg.unindent()
+ tmpl = templates._PLATFORM_CALLBACKS_END
+ self._cg.add_lines(tmpl)
+
+ def generate_bitfield_header(self):
+ self._cg.reset()
+ tmpl = templates._BITFIELD
+ tmpl = tmpl.replace('$prefix$', self._cfg.prefix)
+ tmpl = tmpl.replace('$PREFIX$', self._cfg.prefix.upper())
+
+ if self._cfg.metadata.trace.byte_order == metadata.ByteOrder.BE:
+ endian_def = 'BIG_ENDIAN'
+ else:
+ endian_def = 'LITTLE_ENDIAN'
+
+ tmpl = tmpl.replace('$ENDIAN_DEF$', endian_def)
+ self._cg.add_lines(tmpl)
+
+ return self._cg.code
+
+ def _generate_func_init_proto(self):
+ tmpl = templates._FUNC_INIT_PROTO
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix))
+
+ def _get_int_ctype(self, t):
+ signed = 'u' if not t.signed else ''
+
+ if t.size <= 8:
+ sz = '8'
+ elif t.size <= 16:
+ sz = '16'
+ elif t.size <= 32:
+ sz = '32'
+ else:
+ sz = '64'
+
+ return '{}int{}_t'.format(signed, sz)
+
+ def _get_float_ctype(self, t):
+ if t.exp_size == 8 and t.mant_size == 24 and t.align == 32:
+ ctype = 'float'
+ elif t.exp_size == 11 and t.mant_size == 53 and t.align == 64:
+ ctype = 'double'
+ else:
+ ctype = 'uint64_t'
+
+ return ctype
+
+ def _get_enum_ctype(self, t):
+ return self._get_int_ctype(t.value_type)
+
+ def _get_string_ctype(self, t):
+ return 'const char *'
+
+ def _get_type_ctype(self, t):
+ return self._type_to_get_ctype_func[type(t)](t)
+
+ def _generate_type_ctype(self, t):
+ ctype = self._get_type_ctype(t)
+ self._cg.append_to_last_line(ctype)
+
+ def _generate_proto_param(self, t, name):
+ self._generate_type_ctype(t)
+ self._cg.append_to_last_line(' ')
+ self._cg.append_to_last_line(name)
+
+ def _generate_proto_params(self, t, name_prefix, exclude_list):
+ self._cg.indent()
+
+ for field_name, field_type in t.fields.items():
+ if field_name in exclude_list:
+ continue
+
+ name = name_prefix + field_name
+ self._cg.append_to_last_line(',')
+ self._cg.add_line('')
+ self._generate_proto_param(field_type, name)
+
+ self._cg.unindent()
+
+ def _generate_func_open_proto(self, stream):
+ tmpl = templates._FUNC_OPEN_PROTO_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name))
+ trace_packet_header_type = self._cfg.metadata.trace.packet_header_type
+
+ if trace_packet_header_type is not None:
+ exclude_list = ['magic', 'stream_id', 'uuid']
+ self._generate_proto_params(trace_packet_header_type, _PREFIX_TPH,
+ exclude_list)
+
+ if stream.packet_context_type is not None:
+ exclude_list = [
+ 'timestamp_begin',
+ 'timestamp_end',
+ 'packet_size',
+ 'content_size',
+ 'events_discarded',
+ ]
+ self._generate_proto_params(stream.packet_context_type,
+ _PREFIX_SPC, exclude_list)
+
+ tmpl = templates._FUNC_OPEN_PROTO_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_close_proto(self, stream):
+ tmpl = templates._FUNC_CLOSE_PROTO
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name))
+
+ def _generate_func_trace_proto_params(self, stream, event):
+ if stream.event_header_type is not None:
+ exclude_list = [
+ 'id',
+ 'timestamp',
+ ]
+ self._generate_proto_params(stream.event_header_type,
+ _PREFIX_SEH, exclude_list)
+
+ if stream.event_context_type is not None:
+ self._generate_proto_params(stream.event_context_type,
+ _PREFIX_SEC, [])
+
+ if event.context_type is not None:
+ self._generate_proto_params(event.context_type,
+ _PREFIX_EC, [])
+
+ if event.payload_type is not None:
+ self._generate_proto_params(event.payload_type,
+ _PREFIX_EP, [])
+
+ def _generate_func_trace_proto(self, stream, event):
+ tmpl = templates._FUNC_TRACE_PROTO_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name, evname=event.name))
+ self._generate_func_trace_proto_params(stream, event)
+ tmpl = templates._FUNC_TRACE_PROTO_END
+ self._cg.add_lines(tmpl)
+
+ def _punctuate_proto(self):
+ self._cg.append_to_last_line(';')
+
+ def generate_header(self):
+ self._cg.reset()
+ dt = datetime.datetime.now().isoformat()
+ bh_filename = self.get_bitfield_header_filename()
+ tmpl = templates._HEADER_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ ucprefix=self._cfg.prefix.upper(),
+ bitfield_header_filename=bh_filename,
+ version=barectf.__version__, date=dt))
+ self._cg.add_empty_line()
+
+ # platform callbacks structure
+ self._generate_platform_callbacks()
+ self._cg.add_empty_line()
+
+ # context parent
+ self._generate_ctx_parent()
+ self._cg.add_empty_line()
+
+ # stream contexts
+ self._generate_ctxs()
+ self._cg.add_empty_line()
+
+ # initialization function prototype
+ self._generate_func_init_proto()
+ self._punctuate_proto()
+ self._cg.add_empty_line()
+
+ for stream in self._cfg.metadata.streams.values():
+ self._generate_func_open_proto(stream)
+ self._punctuate_proto()
+ self._cg.add_empty_line()
+ self._generate_func_close_proto(stream)
+ self._punctuate_proto()
+ self._cg.add_empty_line()
+
+ for ev in stream.events.values():
+ self._generate_func_trace_proto(stream, ev)
+ self._punctuate_proto()
+ self._cg.add_empty_line()
+
+ tmpl = templates._HEADER_END
+ self._cg.add_lines(tmpl.format(ucprefix=self._cfg.prefix.upper()))
+
+ return self._cg.code
+
+ def _get_call_event_param_list_from_struct(self, t, prefix, exclude_list):
+ lst = ''
+
+ for field_name in t.fields:
+ if field_name in exclude_list:
+ continue
+
+ lst += ', {}{}'.format(prefix, field_name)
+
+ return lst
+
+ def _get_call_event_param_list(self, stream, event):
+ lst = ''
+ gcp_func = self._get_call_event_param_list_from_struct
+
+ if stream.event_header_type is not None:
+ exclude_list = [
+ 'id',
+ 'timestamp',
+ ]
+ lst += gcp_func(stream.event_header_type, _PREFIX_SEH, exclude_list)
+
+ if stream.event_context_type is not None:
+ lst += gcp_func(stream.event_context_type, _PREFIX_SEC, [])
+
+ if event.context_type is not None:
+ lst += gcp_func(event.context_type, _PREFIX_EC, [])
+
+ if event.payload_type is not None:
+ lst += gcp_func(event.payload_type, _PREFIX_EP, [])
+
+ return lst
+
+ def _generate_align(self, at, align):
+ self._cg.add_line('_ALIGN({}, {});'.format(at, align))
+ self._sasa.align(align)
+
+ def _generate_align_type(self, at, t):
+ if t.align == 1:
+ return
+
+ self._generate_align(at, t.align)
+
+ def _generate_incr_pos(self, var, value):
+ self._cg.add_line('{} += {};'.format(var, value))
+
+ def _generate_incr_pos_bytes(self, var, value):
+ self._generate_incr_pos(var, '_BYTES_TO_BITS({})'.format(value))
+
+ def _generate_func_get_event_size_proto(self, stream, event):
+ tmpl = templates._FUNC_GET_EVENT_SIZE_PROTO_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name, evname=event.name))
+ self._generate_func_trace_proto_params(stream, event)
+ tmpl = templates._FUNC_GET_EVENT_SIZE_PROTO_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_get_event_size_from_entity(self, prefix, t):
+ self._cg.add_line('{')
+ self._cg.indent()
+ self._cg.add_cc_line('align structure')
+ self._generate_align_type('at', t)
+
+ for field_name, field_type in t.fields.items():
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+ self._generate_align_type('at', field_type)
+
+ if type(field_type) is metadata.String:
+ param = prefix + field_name
+ self._generate_incr_pos_bytes('at',
+ 'strlen({}) + 1'.format(param))
+ else:
+ self._generate_incr_pos('at', field_type.size)
+
+ self._cg.unindent()
+ self._cg.add_line('}')
+ self._cg.add_empty_line()
+
+ def _generate_func_get_event_size(self, stream, event):
+ self._generate_func_get_event_size_proto(stream, event)
+ tmpl = templates._FUNC_GET_EVENT_SIZE_BODY_BEGIN
+ self._cg.add_lines(tmpl)
+ self._cg.add_empty_line()
+ self._cg.indent()
+ func = self._generate_func_get_event_size_from_entity
+
+ if stream.event_header_type is not None:
+ self._cg.add_cc_line('stream event header')
+ func(_PREFIX_SEH, stream.event_header_type)
+
+ if stream.event_context_type is not None:
+ self._cg.add_cc_line('stream event context')
+ func(_PREFIX_SEC, stream.event_context_type)
+
+ if event.context_type is not None:
+ self._cg.add_cc_line('event context')
+ func(_PREFIX_EC, event.context_type)
+
+ if event.payload_type is not None:
+ self._cg.add_cc_line('event payload')
+ func(_PREFIX_EP, event.payload_type)
+
+ self._cg.unindent()
+ tmpl = templates._FUNC_GET_EVENT_SIZE_BODY_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_serialize_event_proto(self, stream, event):
+ tmpl = templates._FUNC_SERIALIZE_EVENT_PROTO_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name, evname=event.name))
+ self._generate_func_trace_proto_params(stream, event)
+ tmpl = templates._FUNC_SERIALIZE_EVENT_PROTO_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_bitfield_write(self, var, ctx, t):
+ ptr = '&{ctx}->buf[_BITS_TO_BYTES({ctx}->at)]'.format(ctx=ctx)
+ start = self._sasa.byte_offset
+ suffix = 'le' if t.byte_order is metadata.ByteOrder.LE else 'be'
+ func = '{}bt_bitfield_write_{}'.format(self._cfg.prefix, suffix)
+ call = '{}({}, uint8_t, {}, {}, {});'.format(func, ptr, start, t.size,
+ var)
+ self._cg.add_line(call)
+
+ def _generate_serialize_int(self, var, ctx, t):
+ self._generate_bitfield_write(var, ctx, t)
+ self._generate_incr_pos('{}->at'.format(ctx), t.size)
+
+ def _generate_serialize_float(self, var, ctx, t):
+ ctype = self._get_type_ctype(t)
+
+ if ctype == 'float':
+ ctype = 'uint32_t'
+ elif ctype == 'double':
+ ctype = 'uint64_t'
+
+ var_casted = '*(({}*) &{})'.format(ctype, var)
+ self._generate_bitfield_write(var_casted, ctx, t)
+ self._generate_incr_pos('{}->at'.format(ctx), t.size)
+
+ def _generate_serialize_enum(self, var, ctx, t):
+ self._generate_serialize_type(var, ctx, t.value_type)
+
+ def _generate_serialize_string(self, var, ctx, t):
+ tmpl = '_write_cstring({}, {});'.format(ctx, var)
+ self._cg.add_lines(tmpl)
+
+ def _generate_serialize_type(self, var, ctx, t):
+ self._type_to_generate_serialize_func[type(t)](var, ctx, t)
+ self._sasa.write_type(t)
+
+ def _generate_func_serialize_event_from_entity(self, prefix, t,
+ spec_src=None):
+ self._cg.add_line('{')
+ self._cg.indent()
+ self._cg.add_cc_line('align structure')
+ self._sasa.reset()
+ self._generate_align_type('ctx->at', t)
+
+ for field_name, field_type in t.fields.items():
+ src = prefix + field_name
+
+ if spec_src is not None:
+ if field_name in spec_src:
+ src = spec_src[field_name]
+
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+ self._generate_align_type('ctx->at', field_type)
+ self._generate_serialize_type(src, 'ctx', field_type)
+
+ self._cg.unindent()
+ self._cg.add_line('}')
+ self._cg.add_empty_line()
+
+ def _generate_func_serialize_event(self, stream, event):
+ self._generate_func_serialize_event_proto(stream, event)
+ tmpl = templates._FUNC_SERIALIZE_EVENT_BODY_BEGIN
+ self._cg.add_lines(tmpl)
+ self._cg.indent()
+
+ if stream.event_header_type is not None:
+ t = stream.event_header_type
+ exclude_list = ['timestamp', 'id']
+ params = self._get_call_event_param_list_from_struct(t, _PREFIX_SEH,
+ exclude_list)
+ tmpl = '_serialize_stream_event_header_{sname}(ctx, {evid}{params});'
+ self._cg.add_cc_line('stream event header')
+ self._cg.add_line(tmpl.format(sname=stream.name, evid=event.id,
+ params=params))
+ self._cg.add_empty_line()
+
+ if stream.event_context_type is not None:
+ t = stream.event_context_type
+ params = self._get_call_event_param_list_from_struct(t, _PREFIX_SEC,
+ [])
+ tmpl = '_serialize_stream_event_context_{sname}(ctx{params});'
+ self._cg.add_cc_line('stream event context')
+ self._cg.add_line(tmpl.format(sname=stream.name, params=params))
+ self._cg.add_empty_line()
+
+ if event.context_type is not None:
+ self._cg.add_cc_line('event context')
+ self._generate_func_serialize_event_from_entity(_PREFIX_EC,
+ event.context_type)
+
+ if event.payload_type is not None:
+ self._cg.add_cc_line('event payload')
+ self._generate_func_serialize_event_from_entity(_PREFIX_EP,
+ event.payload_type)
+
+ self._cg.unindent()
+ tmpl = templates._FUNC_SERIALIZE_EVENT_BODY_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_serialize_stream_event_header_proto(self, stream):
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_HEADER_PROTO_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name))
+
+ if stream.event_header_type is not None:
+ exclude_list = [
+ 'id',
+ 'timestamp',
+ ]
+ self._generate_proto_params(stream.event_header_type,
+ _PREFIX_SEH, exclude_list)
+
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_HEADER_PROTO_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_serialize_stream_event_context_proto(self, stream):
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_PROTO_BEGIN
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ sname=stream.name))
+
+ if stream.event_context_type is not None:
+ self._generate_proto_params(stream.event_context_type,
+ _PREFIX_SEC, [])
+
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_PROTO_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_serialize_stream_event_header(self, stream):
+ self._generate_func_serialize_stream_event_header_proto(stream)
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_HEADER_BODY_BEGIN
+ self._cg.add_lines(tmpl)
+ self._cg.indent()
+
+ if stream.event_header_type is not None:
+ if 'timestamp' in stream.event_header_type.fields:
+ timestamp = stream.event_header_type.fields['timestamp']
+ ts_ctype = self._get_int_ctype(timestamp)
+ clock = timestamp.property_mappings[0].object
+ clock_name = clock.name
+ clock_ctype = clock.return_ctype
+ tmpl = '{} ts = ctx->cbs.{}_clock_get_value(ctx->data);'
+ self._cg.add_line(tmpl.format(clock_ctype, clock_name))
+
+ self._cg.add_empty_line()
+ func = self._generate_func_serialize_event_from_entity
+
+ if stream.event_header_type is not None:
+ spec_src = {}
+
+ if 'id' in stream.event_header_type.fields:
+ id_t = stream.event_header_type.fields['id']
+ id_t_ctype = self._get_int_ctype(id_t)
+ spec_src['id'] = '({}) event_id'.format(id_t_ctype)
+
+ if 'timestamp' in stream.event_header_type.fields:
+ spec_src['timestamp'] = '({}) ts'.format(ts_ctype)
+
+ func(_PREFIX_SEH, stream.event_header_type, spec_src)
+
+ self._cg.unindent()
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_HEADER_BODY_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_serialize_stream_event_context(self, stream):
+ self._generate_func_serialize_stream_event_context_proto(stream)
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_BODY_BEGIN
+ self._cg.add_lines(tmpl)
+ self._cg.indent()
+ func = self._generate_func_serialize_event_from_entity
+
+ if stream.event_context_type is not None:
+ func(_PREFIX_SEC, stream.event_context_type)
+
+ self._cg.unindent()
+ tmpl = templates._FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_BODY_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_trace(self, stream, event):
+ self._generate_func_trace_proto(stream, event)
+ params = self._get_call_event_param_list(stream, event)
+ tmpl = templates._FUNC_TRACE_BODY
+ self._cg.add_lines(tmpl.format(sname=stream.name, evname=event.name,
+ params=params))
+
+ def _generate_func_init(self):
+ self._generate_func_init_proto()
+ tmpl = templates._FUNC_INIT_BODY
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix))
+
+ def _generate_field_name_cc_line(self, field_name):
+ self._cg.add_cc_line('"{}" field'.format(field_name))
+
+ def _save_byte_offset(self, name):
+ self._saved_byte_offsets[name] = self._sasa.byte_offset
+
+ def _restore_byte_offset(self, name):
+ self._sasa.byte_offset = self._saved_byte_offsets[name]
+
+ def _generate_func_open(self, stream):
+ def generate_save_offset(name):
+ tmpl = 'ctx->off_spc_{} = ctx->parent.at;'.format(name)
+ self._cg.add_line(tmpl)
+ self._save_byte_offset(name)
+
+ self._generate_func_open_proto(stream)
+ tmpl = templates._FUNC_OPEN_BODY_BEGIN
+ self._cg.add_lines(tmpl)
+ self._cg.indent()
+ tph_type = self._cfg.metadata.trace.packet_header_type
+ spc_type = stream.packet_context_type
+
+ if spc_type is not None and 'timestamp_begin' in spc_type.fields:
+ field = spc_type.fields['timestamp_begin']
+ tmpl = '{} ts = ctx->parent.cbs.{}_clock_get_value(ctx->parent.data);'
+ clock = field.property_mappings[0].object
+ clock_ctype = clock.return_ctype
+ clock_name = clock.name
+ self._cg.add_line(tmpl.format(clock_ctype, clock_name))
+ self._cg.add_empty_line()
+
+ self._cg.add_cc_line('do not open a packet that is already open')
+ self._cg.add_line('if (ctx->parent.packet_is_open) {')
+ self._cg.indent()
+ self._cg.add_line('return;')
+ self._cg.unindent()
+ self._cg.add_line('}')
+ self._cg.add_empty_line()
+ self._cg.add_line('ctx->parent.at = 0;')
+
+ if tph_type is not None:
+ self._cg.add_empty_line()
+ self._cg.add_cc_line('trace packet header')
+ self._cg.add_line('{')
+ self._cg.indent()
+ self._cg.add_cc_line('align structure')
+ self._sasa.reset()
+ self._generate_align_type('ctx->parent.at', tph_type)
+
+ for field_name, field_type in tph_type.fields.items():
+ src = _PREFIX_TPH + field_name
+
+ if field_name == 'magic':
+ src = '0xc1fc1fc1UL'
+ elif field_name == 'stream_id':
+ stream_id_ctype = self._get_int_ctype(field_type)
+ src = '({}) {}'.format(stream_id_ctype, stream.id)
+ elif field_name == 'uuid':
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+ self._cg.add_line('{')
+ self._cg.indent()
+ self._cg.add_line('static uint8_t uuid[] = {')
+ self._cg.indent()
+
+ for b in self._cfg.metadata.trace.uuid.bytes:
+ self._cg.add_line('{},'.format(b))
+
+ self._cg.unindent()
+ self._cg.add_line('};')
+ self._cg.add_empty_line()
+ self._generate_align('ctx->parent.at', 8)
+ line = 'memcpy(&ctx->parent.buf[_BITS_TO_BYTES(ctx->parent.at)], uuid, 16);'
+ self._cg.add_line(line)
+ self._generate_incr_pos_bytes('ctx->parent.at', 16)
+ self._cg.unindent()
+ self._cg.add_line('}')
+ self._sasa.reset()
+ continue
+
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+ self._generate_align_type('ctx->parent.at', field_type)
+ self._generate_serialize_type(src, '(&ctx->parent)', field_type)
+
+ self._cg.unindent()
+ self._cg.add_lines('}')
+
+ if spc_type is not None:
+ self._cg.add_empty_line()
+ self._cg.add_cc_line('stream packet context')
+ self._cg.add_line('{')
+ self._cg.indent()
+ self._cg.add_cc_line('align structure')
+ self._sasa.reset()
+ self._generate_align_type('ctx->parent.at', spc_type)
+
+ for field_name, field_type in spc_type.fields.items():
+ src = _PREFIX_SPC + field_name
+ skip_int = False
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+
+ if field_name == 'timestamp_begin':
+ ctype = self._get_type_ctype(field_type)
+ src = '({}) ts'.format(ctype)
+ elif field_name in ['timestamp_end', 'content_size',
+ 'events_discarded']:
+ skip_int = True
+ elif field_name == 'packet_size':
+ ctype = self._get_type_ctype(field_type)
+ src = '({}) ctx->parent.packet_size'.format(ctype)
+
+ self._generate_align_type('ctx->parent.at', field_type)
+
+ if skip_int:
+ generate_save_offset(field_name)
+ self._generate_incr_pos('ctx->parent.at', field_type.size)
+ self._sasa.write_type(field_type)
+ else:
+ self._generate_serialize_type(src, '(&ctx->parent)',
+ field_type)
+
+ self._cg.unindent()
+ self._cg.add_lines('}')
+
+ self._cg.unindent()
+ tmpl = templates._FUNC_OPEN_BODY_END
+ self._cg.add_lines(tmpl)
+
+ def _generate_func_close(self, stream):
+ def generate_goto_offset(name):
+ tmpl = 'ctx->parent.at = ctx->off_spc_{};'.format(name)
+ self._cg.add_line(tmpl)
+
+ self._generate_func_close_proto(stream)
+ tmpl = templates._FUNC_CLOSE_BODY_BEGIN
+ self._cg.add_lines(tmpl)
+ self._cg.indent()
+ spc_type = stream.packet_context_type
+
+ if spc_type is not None:
+ if 'timestamp_end' in spc_type.fields:
+ tmpl = '{} ts = ctx->parent.cbs.{}_clock_get_value(ctx->parent.data);'
+ field = spc_type.fields['timestamp_end']
+ clock = field.property_mappings[0].object
+ clock_ctype = clock.return_ctype
+ clock_name = clock.name
+ self._cg.add_line(tmpl.format(clock_ctype, clock_name))
+ self._cg.add_empty_line()
+
+ self._cg.add_cc_line('do not close a packet that is not open')
+ self._cg.add_line('if (!ctx->parent.packet_is_open) {')
+ self._cg.indent()
+ self._cg.add_line('return;')
+ self._cg.unindent()
+ self._cg.add_line('}')
+ self._cg.add_empty_line()
+ self._cg.add_cc_line('save content size')
+ self._cg.add_line('ctx->parent.content_size = ctx->parent.at;')
+
+ if spc_type is not None:
+ field_name = 'timestamp_end'
+
+ if field_name in spc_type.fields:
+ t = spc_type.fields[field_name]
+ ctype = self._get_type_ctype(t)
+ src = '({}) ts'.format(ctype)
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+ generate_goto_offset(field_name)
+ self._restore_byte_offset(field_name)
+ self._generate_serialize_type(src, '(&ctx->parent)', t)
+
+ field_name = 'content_size'
+
+            if field_name in spc_type.fields:
+ t = spc_type.fields[field_name]
+ ctype = self._get_type_ctype(t)
+ src = '({}) ctx->parent.content_size'.format(ctype)
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+ generate_goto_offset(field_name)
+ self._restore_byte_offset(field_name)
+ self._generate_serialize_type(src, '(&ctx->parent)', t)
+
+ field_name = 'events_discarded'
+
+ if field_name in spc_type.fields:
+ t = spc_type.fields[field_name]
+ ctype = self._get_type_ctype(t)
+ src = '({}) ctx->parent.events_discarded'.format(ctype)
+ self._cg.add_empty_line()
+ self._generate_field_name_cc_line(field_name)
+ generate_goto_offset(field_name)
+ self._restore_byte_offset(field_name)
+ self._generate_serialize_type(src, '(&ctx->parent)', t)
+
+ self._cg.unindent()
+ tmpl = templates._FUNC_CLOSE_BODY_END
+ self._cg.add_lines(tmpl)
+ self._sasa.reset()
+
+ def generate_c_src(self):
+ self._cg.reset()
+ dt = datetime.datetime.now().isoformat()
+ header_filename = self.get_header_filename()
+ tmpl = templates._C_SRC
+ self._cg.add_lines(tmpl.format(prefix=self._cfg.prefix,
+ header_filename=header_filename,
+ version=barectf.__version__, date=dt))
+ self._cg.add_empty_line()
+
+ # initialization function
+ self._generate_func_init()
+ self._cg.add_empty_line()
+
+ for stream in self._cfg.metadata.streams.values():
+ self._generate_func_open(stream)
+ self._cg.add_empty_line()
+ self._generate_func_close(stream)
+ self._cg.add_empty_line()
+
+ if stream.event_header_type is not None:
+ self._generate_func_serialize_stream_event_header(stream)
+ self._cg.add_empty_line()
+
+ if stream.event_context_type is not None:
+ self._generate_func_serialize_stream_event_context(stream)
+ self._cg.add_empty_line()
+
+ for ev in stream.events.values():
+ self._generate_func_get_event_size(stream, ev)
+ self._cg.add_empty_line()
+ self._generate_func_serialize_event(stream, ev)
+ self._cg.add_empty_line()
+ self._generate_func_trace(stream, ev)
+ self._cg.add_empty_line()
+
+ return self._cg.code
+
+ def get_header_filename(self):
+ return '{}.h'.format(self._cfg.prefix.rstrip('_'))
+
+ def get_bitfield_header_filename(self):
+ return '{}-bitfield.h'.format(self._cfg.prefix.rstrip('_'))
--- /dev/null
+# The MIT License (MIT)
+#
+# Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+
+import enum
+import collections
+
+
+@enum.unique
+class ByteOrder(enum.Enum):
+ LE = 0
+ BE = 1
+
+
+@enum.unique
+class Encoding(enum.Enum):
+ NONE = 0
+ UTF8 = 1
+ ASCII = 2
+
+
+class Type:
+ @property
+ def align(self):
+ raise NotImplementedError()
+
+ @property
+ def size(self):
+ raise NotImplementedError()
+
+ @size.setter
+ def size(self, value):
+ self._size = value
+
+
+class PropertyMapping:
+ def __init__(self, object, prop):
+ self._object = object
+ self._prop = prop
+
+ @property
+ def object(self):
+ return self._object
+
+ @object.setter
+ def object(self, value):
+ self._object = value
+
+    @property
+    def prop(self):
+        return self._prop
+
+    @prop.setter
+    def prop(self, value):
+        self._prop = value
+
+
+class Integer(Type):
+ def __init__(self):
+ self._size = None
+ self._align = None
+ self._signed = False
+ self._byte_order = None
+ self._base = 10
+ self._encoding = Encoding.NONE
+ self._property_mappings = []
+
+ @property
+ def signed(self):
+ return self._signed
+
+ @signed.setter
+ def signed(self, value):
+ self._signed = value
+
+ @property
+ def byte_order(self):
+ return self._byte_order
+
+ @byte_order.setter
+ def byte_order(self, value):
+ self._byte_order = value
+
+ @property
+ def base(self):
+ return self._base
+
+ @base.setter
+ def base(self, value):
+ self._base = value
+
+ @property
+ def encoding(self):
+ return self._encoding
+
+ @encoding.setter
+ def encoding(self, value):
+ self._encoding = value
+
+ @property
+ def align(self):
+ if self._align is None:
+ if self._size is None:
+ return None
+ else:
+ if self._size % 8 == 0:
+ return 8
+ else:
+ return 1
+ else:
+ return self._align
+
+ @align.setter
+ def align(self, value):
+ self._align = value
+
+ @property
+ def size(self):
+ return self._size
+
+ @size.setter
+ def size(self, value):
+ self._size = value
+
+ @property
+ def property_mappings(self):
+ return self._property_mappings
+
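The `align` property above encodes the default-alignment rule for integers: an explicit alignment always wins; otherwise the alignment is derived from the size. A minimal standalone sketch of that rule (the helper name is hypothetical, not part of barectf):

```python
# Hypothetical sketch of the rule in Integer.align above: an explicit
# alignment wins; otherwise a size that is a multiple of 8 bits
# defaults to byte alignment (8), and any other size to bit
# alignment (1).
def default_int_align(size, explicit_align=None):
    if explicit_align is not None:
        return explicit_align

    if size is None:
        return None

    return 8 if size % 8 == 0 else 1
```

For example, a 32-bit integer defaults to byte alignment, while a 5-bit integer is bit-packed.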
+
+class FloatingPoint(Type):
+ def __init__(self):
+ self._exp_size = None
+ self._mant_size = None
+ self._align = 8
+ self._byte_order = None
+
+ @property
+ def exp_size(self):
+ return self._exp_size
+
+ @exp_size.setter
+ def exp_size(self, value):
+ self._exp_size = value
+
+ @property
+ def mant_size(self):
+ return self._mant_size
+
+ @mant_size.setter
+ def mant_size(self, value):
+ self._mant_size = value
+
+ @property
+ def size(self):
+ return self._exp_size + self._mant_size
+
+ @property
+ def byte_order(self):
+ return self._byte_order
+
+ @byte_order.setter
+ def byte_order(self, value):
+ self._byte_order = value
+
+ @property
+ def align(self):
+ return self._align
+
+ @align.setter
+ def align(self, value):
+ self._align = value
+
+
+class Enum(Type):
+ def __init__(self):
+ self._value_type = None
+ self._members = collections.OrderedDict()
+
+ @property
+ def align(self):
+ return self._value_type.align
+
+ @property
+ def size(self):
+ return self._value_type.size
+
+ @property
+ def value_type(self):
+ return self._value_type
+
+ @value_type.setter
+ def value_type(self, value):
+ self._value_type = value
+
+ @property
+ def members(self):
+ return self._members
+
+ def value_of(self, label):
+ return self._members[label]
+
+ def label_of(self, value):
+ for label, vrange in self._members.items():
+            if vrange[0] <= value <= vrange[1]:
+ return label
+
+ def __getitem__(self, key):
+ if type(key) is str:
+ return self.value_of(key)
+ elif type(key) is int:
+ return self.label_of(key)
+
+ raise TypeError('wrong subscript type')
+
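`Enum.label_of()` above performs a linear search through ordered, inclusive `(low, high)` value ranges, returning the first matching label. A standalone sketch of that lookup (the member values here are hypothetical examples):

```python
import collections

# members map labels to inclusive (low, high) value ranges, mirroring
# Enum._members above (example values, not from barectf)
members = collections.OrderedDict([
    ('RUNNING', (0, 0)),
    ('WAITING', (1, 5)),
    ('STOPPED', (6, 6)),
])

def label_of(value):
    # first label whose range contains the value wins;
    # falls through to None when no range matches
    for label, (low, high) in members.items():
        if low <= value <= high:
            return label
```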
+
+class String(Type):
+ def __init__(self):
+ self._encoding = Encoding.UTF8
+
+ @property
+ def size(self):
+ return None
+
+ @property
+ def align(self):
+ return 8
+
+ @property
+ def encoding(self):
+ return self._encoding
+
+ @encoding.setter
+ def encoding(self, value):
+ self._encoding = value
+
+
+class Array(Type):
+ def __init__(self):
+ self._element_type = None
+ self._length = None
+
+ @property
+ def align(self):
+ return self._element_type.align
+
+ @property
+ def element_type(self):
+ return self._element_type
+
+ @element_type.setter
+ def element_type(self, value):
+ self._element_type = value
+
+ @property
+ def length(self):
+ return self._length
+
+ @length.setter
+ def length(self, value):
+ self._length = value
+
+ @property
+ def is_static(self):
+ return type(self._length) is int
+
+ @property
+ def size(self):
+ if self.length == 0:
+ return 0
+
+ element_sz = self.element_type.size
+
+ if element_sz is None:
+ return None
+
+ # TODO: compute static size here
+ return None
+
+
+class Struct(Type):
+ def __init__(self):
+ self._min_align = 1
+ self._fields = collections.OrderedDict()
+
+ @property
+ def min_align(self):
+ return self._min_align
+
+ @min_align.setter
+ def min_align(self, value):
+ self._min_align = value
+
+ @property
+ def align(self):
+ fields_max = max([f.align for f in self.fields.values()] + [1])
+
+ return max(fields_max, self._min_align)
+
+ @property
+ def size(self):
+ # TODO: compute static size here (if available)
+ return None
+
+ @property
+ def fields(self):
+ return self._fields
+
+ def __getitem__(self, key):
+ return self.fields[key]
+
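`Struct.align` above derives a structure's alignment from its members: the largest member alignment, floored by `min_align`, defaulting to 1 for an empty structure. Sketched standalone (hypothetical helper name; alignments are in bits, as elsewhere in this model):

```python
# Hypothetical sketch of Struct.align above: the alignment of a
# structure is the largest member alignment, never less than
# min_align, and 1 when the structure has no members.
def struct_align(member_aligns, min_align=1):
    return max(max(member_aligns, default=1), min_align)
```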
+
+class Variant(Type):
+ def __init__(self):
+ self._tag = None
+ self._types = collections.OrderedDict()
+
+ @property
+ def align(self):
+ return 1
+
+ @property
+ def size(self):
+        if len(self.types) == 1:
+            return list(self.types.values())[0].size
+
+ return None
+
+ @property
+ def tag(self):
+ return self._tag
+
+ @tag.setter
+ def tag(self, value):
+ self._tag = value
+
+ @property
+ def types(self):
+ return self._types
+
+ def __getitem__(self, key):
+ return self.types[key]
+
+
+class Trace:
+ def __init__(self):
+ self._byte_order = None
+ self._packet_header_type = None
+ self._uuid = None
+
+ @property
+ def uuid(self):
+ return self._uuid
+
+ @uuid.setter
+ def uuid(self, value):
+ self._uuid = value
+
+ @property
+ def byte_order(self):
+ return self._byte_order
+
+ @byte_order.setter
+ def byte_order(self, value):
+ self._byte_order = value
+
+ @property
+ def packet_header_type(self):
+ return self._packet_header_type
+
+ @packet_header_type.setter
+ def packet_header_type(self, value):
+ self._packet_header_type = value
+
+
+class Env(collections.OrderedDict):
+ pass
+
+
+class Clock:
+ def __init__(self):
+ self._name = None
+ self._uuid = None
+ self._description = None
+ self._freq = 1000000000
+ self._error_cycles = 0
+ self._offset_seconds = 0
+ self._offset_cycles = 0
+ self._absolute = False
+ self._return_ctype = 'uint32_t'
+
+ @property
+ def name(self):
+ return self._name
+
+ @name.setter
+ def name(self, value):
+ self._name = value
+
+ @property
+ def uuid(self):
+ return self._uuid
+
+ @uuid.setter
+ def uuid(self, value):
+ self._uuid = value
+
+ @property
+ def description(self):
+ return self._description
+
+ @description.setter
+ def description(self, value):
+ self._description = value
+
+ @property
+ def error_cycles(self):
+ return self._error_cycles
+
+ @error_cycles.setter
+ def error_cycles(self, value):
+ self._error_cycles = value
+
+ @property
+ def freq(self):
+ return self._freq
+
+ @freq.setter
+ def freq(self, value):
+ self._freq = value
+
+ @property
+ def offset_seconds(self):
+ return self._offset_seconds
+
+ @offset_seconds.setter
+ def offset_seconds(self, value):
+ self._offset_seconds = value
+
+ @property
+ def offset_cycles(self):
+ return self._offset_cycles
+
+ @offset_cycles.setter
+ def offset_cycles(self, value):
+ self._offset_cycles = value
+
+ @property
+ def absolute(self):
+ return self._absolute
+
+ @absolute.setter
+ def absolute(self, value):
+ self._absolute = value
+
+
+class Event:
+ def __init__(self):
+ self._id = None
+ self._name = None
+ self._log_level = None
+ self._context_type = None
+ self._payload_type = None
+
+ @property
+ def id(self):
+ return self._id
+
+ @id.setter
+ def id(self, value):
+ self._id = value
+
+ @property
+ def name(self):
+ return self._name
+
+ @name.setter
+ def name(self, value):
+ self._name = value
+
+ @property
+ def log_level(self):
+ return self._log_level
+
+ @log_level.setter
+ def log_level(self, value):
+ self._log_level = value
+
+ @property
+ def context_type(self):
+ return self._context_type
+
+ @context_type.setter
+ def context_type(self, value):
+ self._context_type = value
+
+ @property
+ def payload_type(self):
+ return self._payload_type
+
+ @payload_type.setter
+ def payload_type(self, value):
+ self._payload_type = value
+
+ def __getitem__(self, key):
+ if type(self.payload_type) in [Struct, Variant]:
+ return self.payload_type[key]
+
+        raise TypeError('{} is not subscriptable'.format(self._name))
+
+
+class Stream:
+ def __init__(self):
+ self._id = 0
+ self._name = None
+ self._packet_context_type = None
+ self._event_header_type = None
+ self._event_context_type = None
+ self._events = collections.OrderedDict()
+
+ @property
+ def name(self):
+ return self._name
+
+ @name.setter
+ def name(self, value):
+ self._name = value
+
+ @property
+ def id(self):
+ return self._id
+
+ @id.setter
+ def id(self, value):
+ self._id = value
+
+ @property
+ def packet_context_type(self):
+ return self._packet_context_type
+
+ @packet_context_type.setter
+ def packet_context_type(self, value):
+ self._packet_context_type = value
+
+ @property
+ def event_header_type(self):
+ return self._event_header_type
+
+ @event_header_type.setter
+ def event_header_type(self, value):
+ self._event_header_type = value
+
+ @property
+ def event_context_type(self):
+ return self._event_context_type
+
+ @event_context_type.setter
+ def event_context_type(self, value):
+ self._event_context_type = value
+
+ @property
+ def events(self):
+ return self._events
+
+
+class Metadata:
+ def __init__(self):
+ self._trace = None
+ self._env = collections.OrderedDict()
+ self._clocks = collections.OrderedDict()
+ self._streams = collections.OrderedDict()
+
+ @property
+ def trace(self):
+ return self._trace
+
+ @trace.setter
+ def trace(self, value):
+ self._trace = value
+
+ @property
+ def env(self):
+ return self._env
+
+ @env.setter
+ def env(self, value):
+ self._env = value
+
+ @property
+ def clocks(self):
+ return self._clocks
+
+ @clocks.setter
+ def clocks(self, value):
+ self._clocks = value
+
+ @property
+ def streams(self):
+ return self._streams
+
+ @streams.setter
+ def streams(self, value):
+ self._streams = value
-BARECTF_CTX = """struct {prefix}{sid}_ctx {{
+_CLOCK_CB = '{return_ctype} (*{cname}_clock_get_value)(void *);'
+
+
+_PLATFORM_CALLBACKS_BEGIN = '''/* barectf platform callbacks */
+struct {prefix}platform_callbacks {{
+ /* clock callbacks */'''
+
+
+_PLATFORM_CALLBACKS_END = '''
+ /* is back-end full? */
+ int (*is_backend_full)(void *);
+
+ /* open packet */
+ void (*open_packet)(void *);
+
+ /* close packet */
+ void (*close_packet)(void *);
+};'''
+
+
+_CTX_PARENT = '''/* common barectf context */
+struct {prefix}ctx {{
+ /* platform callbacks */
+ struct {prefix}platform_callbacks cbs;
+
+ /* platform data (passed to callbacks) */
+ void *data;
+
/* output buffer (will contain a CTF binary packet) */
- uint8_t* buf;
+ uint8_t *buf;
- /* buffer size in bits */
+ /* packet size in bits */
uint32_t packet_size;
- /* current position from beginning of buffer in bits */
+ /* content size in bits */
+ uint32_t content_size;
+
+ /* current position from beginning of packet in bits */
uint32_t at;
- /* clock value callback */
-{clock_cb}
+ /* packet header + context size (content offset) */
+ uint32_t off_content;
+
+ /* events discarded */
+ uint32_t events_discarded;
+
+ /* current packet is opened */
+ int packet_is_open;
+}};'''
+
+
+_CTX_BEGIN = '''/* context for stream "{sname}" */
+struct {prefix}{sname}_ctx {{
+ /* parent */
+ struct {prefix}ctx parent;
+
+ /* config-specific members follow */'''
+
+
+_CTX_END = '};'
+
+
+_FUNC_INIT_PROTO = '''/* initialize context */
+void {prefix}init(
+ void *ctx,
+ uint8_t *buf,
+ uint32_t buf_size,
+ struct {prefix}platform_callbacks cbs,
+ void *data
+)'''
+
+
+_FUNC_INIT_BODY = '''{{
+ struct {prefix}ctx *{prefix}ctx = ctx;
+ {prefix}ctx->cbs = cbs;
+ {prefix}ctx->data = data;
+ {prefix}ctx->buf = buf;
+ {prefix}ctx->packet_size = _BYTES_TO_BITS(buf_size);
+ {prefix}ctx->at = 0;
+ {prefix}ctx->events_discarded = 0;
+ {prefix}ctx->packet_is_open = 0;
+}}'''
+
+
+_FUNC_OPEN_PROTO_BEGIN = '''/* open packet for stream "{sname}" */
+void {prefix}{sname}_open_packet(
+ struct {prefix}{sname}_ctx *ctx'''
+
+
+_FUNC_OPEN_PROTO_END = ')'
+
+
+_FUNC_OPEN_BODY_BEGIN = '{'
+
+
+_FUNC_OPEN_BODY_END = '''
+ ctx->parent.off_content = ctx->parent.at;
+
+ /* mark current packet as open */
+ ctx->parent.packet_is_open = 1;
+}'''
+
+
+_FUNC_CLOSE_PROTO = '''/* close packet for stream "{sname}" */
+void {prefix}{sname}_close_packet(struct {prefix}{sname}_ctx *ctx)'''
+
+
+_FUNC_CLOSE_BODY_BEGIN = '{'
+
+
+_FUNC_CLOSE_BODY_END = '''
+ /* go back to end of packet */
+ ctx->parent.at = ctx->parent.packet_size;
+
+ /* mark packet as closed */
+ ctx->parent.packet_is_open = 0;
+}'''
+
+
+_FUNC_TRACE_PROTO_BEGIN = '''/* trace (stream "{sname}", event "{evname}") */
+void {prefix}{sname}_trace_{evname}(
+ struct {prefix}{sname}_ctx *ctx'''
+
+
+_FUNC_TRACE_PROTO_END = ')'
+
+
+_FUNC_TRACE_BODY = '''{{
+ uint32_t ev_size;
+
+ /* get event size */
+ ev_size = _get_event_size_{sname}_{evname}((void *) ctx{params});
+
+ /* do we have enough space to serialize? */
+ if (!_reserve_event_space((void *) ctx, ev_size)) {{
+ /* no: forget this */
+ return;
+ }}
+
+ /* serialize event */
+ _serialize_event_{sname}_{evname}((void *) ctx{params});
+
+ /* commit event */
+ _commit_event((void *) ctx);
+}}'''
+
+
+_FUNC_GET_EVENT_SIZE_PROTO_BEGIN = '''static uint32_t _get_event_size_{sname}_{evname}(
+ struct {prefix}ctx *ctx'''
+
+
+_FUNC_GET_EVENT_SIZE_PROTO_END = ')'
+
+
+_FUNC_GET_EVENT_SIZE_BODY_BEGIN = '''{
+ uint32_t at = ctx->at;'''
+
+
+_FUNC_GET_EVENT_SIZE_BODY_END = ''' return at - ctx->at;
+}'''
+
+
+_FUNC_SERIALIZE_STREAM_EVENT_HEADER_PROTO_BEGIN = '''static void _serialize_stream_event_header_{sname}(
+ struct {prefix}ctx *ctx,
+ uint32_t event_id'''
- /* packet header + context size */
- uint32_t packet_header_context_size;
- /* config-specific members follow */
-{ctx_fields}
-}};"""
+_FUNC_SERIALIZE_STREAM_EVENT_HEADER_PROTO_END = ')'
-FUNC_INIT = """{si}int {prefix}{sid}_init(
- struct {prefix}{sid}_ctx* ctx,
- uint8_t* buf,
- uint32_t buf_size{params}
-)"""
-FUNC_OPEN = """{si}int {prefix}{sid}_open_packet(
- struct {prefix}{sid}_ctx* ctx{params}
-)"""
+_FUNC_SERIALIZE_STREAM_EVENT_HEADER_BODY_BEGIN = '{'
-FUNC_CLOSE = """{si}int {prefix}{sid}_close_packet(
- struct {prefix}{sid}_ctx* ctx{params}
-)"""
-FUNC_TRACE = """{si}int {prefix}{sid}_trace_{evname}(
- struct {prefix}{sid}_ctx* ctx{params}
-)"""
+_FUNC_SERIALIZE_STREAM_EVENT_HEADER_BODY_END = '}'
-WRITE_INTEGER = """{ucprefix}_CHK_OFFSET_V(ctx->at, ctx->packet_size, {sz});
-{prefix}_write_integer_{signed}_{bo}(ctx->buf, ctx->at, {sz}, {src_name});
-ctx->at += {sz};"""
-HEADER = """#ifndef _{ucprefix}_H
-#define _{ucprefix}_H
+_FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_PROTO_BEGIN = '''static void _serialize_stream_event_context_{sname}(
+ struct {prefix}ctx *ctx'''
+
+
+_FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_PROTO_END = ')'
+
+
+_FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_BODY_BEGIN = '{'
+
+
+_FUNC_SERIALIZE_STREAM_EVENT_CONTEXT_BODY_END = '}'
+
+
+_FUNC_SERIALIZE_EVENT_PROTO_BEGIN = '''static void _serialize_event_{sname}_{evname}(
+ struct {prefix}ctx *ctx'''
+
+
+_FUNC_SERIALIZE_EVENT_PROTO_END = ')'
+
+
+_FUNC_SERIALIZE_EVENT_BODY_BEGIN = '{'
+
+
+_FUNC_SERIALIZE_EVENT_BODY_END = '}'
+
+
+_HEADER_BEGIN = '''#ifndef _{ucprefix}H
+#define _{ucprefix}H
+
+/*
+ * The following C code was generated by barectf {version}
+ * on {date}.
+ *
+ * For more details, see <https://github.com/efficios/barectf>.
+ */
#include <stdint.h>
-#include <string.h>
-#include "{prefix}_bitfield.h"
+#include "{bitfield_header_filename}"
-/* barectf contexts */
-{barectf_ctx}
+struct {prefix}ctx;
-/* barectf error codes */
-#define E{ucprefix}_OK 0
-#define E{ucprefix}_NOSPC 1
+uint32_t {prefix}packet_size(void *ctx);
+int {prefix}packet_is_full(void *ctx);
+int {prefix}packet_is_empty(void *ctx);
+uint32_t {prefix}packet_events_discarded(void *ctx);
+uint8_t *{prefix}packet_buf(void *ctx);
+void {prefix}packet_set_buf(void *ctx, uint8_t *buf, uint32_t buf_size);
+uint32_t {prefix}packet_buf_size(void *ctx);
+int {prefix}packet_is_open(void *ctx);'''
-/* alignment macro */
-#define {ucprefix}_ALIGN_OFFSET(_at, _align) \\
- do {{ \\
- _at = ((_at) + (_align - 1)) & -_align; \\
- }} while (0)
-/* buffer overflow check macro */
-#define {ucprefix}_CHK_OFFSET_V(_at, _bufsize, _size) \\
- do {{ \\
- if ((_at) + (_size) > (_bufsize)) {{ \\
- _at = ctx_at_begin; \\
- return -E{ucprefix}_NOSPC; \\
- }} \\
- }} while (0)
+_HEADER_END = '#endif /* _{ucprefix}H */'
-/* generated functions follow */
-{functions}
-#endif /* _{ucprefix}_H */
-"""
+_C_SRC = '''/*
+ * The following C code was generated by barectf {version}
+ * on {date}.
+ *
+ * For more details, see <https://github.com/efficios/barectf>.
+ */
-CSRC = """#include <stdint.h>
+#include <stdint.h>
#include <string.h>
+#include <assert.h>
+
+#include "{header_filename}"
+
+#define _ALIGN(_at, _align) \\
+ do {{ \\
+ (_at) = ((_at) + ((_align) - 1)) & -(_align); \\
+ }} while (0)
-#include "{prefix}.h"
+#define _BITS_TO_BYTES(_x) ((_x) >> 3)
+#define _BYTES_TO_BITS(_x) ((_x) << 3)
-{functions}
-"""
+uint32_t {prefix}packet_size(void *ctx)
+{{
+ return ((struct {prefix}ctx *) ctx)->packet_size;
+}}
-BITFIELD = """#ifndef _$PREFIX$_BITFIELD_H
-#define _$PREFIX$_BITFIELD_H
+int {prefix}packet_is_full(void *ctx)
+{{
+ struct {prefix}ctx *cctx = ctx;
+
+ return cctx->at == cctx->packet_size;
+}}
+
+int {prefix}packet_is_empty(void *ctx)
+{{
+ struct {prefix}ctx *cctx = ctx;
+
+ return cctx->at <= cctx->off_content;
+}}
+
+uint32_t {prefix}packet_events_discarded(void *ctx)
+{{
+ return ((struct {prefix}ctx *) ctx)->events_discarded;
+}}
+
+uint8_t *{prefix}packet_buf(void *ctx)
+{{
+ return ((struct {prefix}ctx *) ctx)->buf;
+}}
+
+uint32_t {prefix}packet_buf_size(void *ctx)
+{{
+ return _BITS_TO_BYTES(((struct {prefix}ctx *) ctx)->packet_size);
+}}
+
+void {prefix}packet_set_buf(void *ctx, uint8_t *buf, uint32_t buf_size)
+{{
+ struct {prefix}ctx *{prefix}ctx = ctx;
+
+ {prefix}ctx->buf = buf;
+ {prefix}ctx->packet_size = _BYTES_TO_BITS(buf_size);
+}}
+
+int {prefix}packet_is_open(void *ctx)
+{{
+ return ((struct {prefix}ctx *) ctx)->packet_is_open;
+}}
+
+static
+void _write_cstring(struct {prefix}ctx *ctx, const char *src)
+{{
+ uint32_t sz = strlen(src) + 1;
+
+ memcpy(&ctx->buf[_BITS_TO_BYTES(ctx->at)], src, sz);
+ ctx->at += _BYTES_TO_BITS(sz);
+}}
+
+static inline
+int _packet_is_full(struct {prefix}ctx *ctx)
+{{
+ return {prefix}packet_is_full(ctx);
+}}
+
+static
+int _reserve_event_space(struct {prefix}ctx *ctx, uint32_t ev_size)
+{{
+ /* event _cannot_ fit? */
+ if (ev_size > (ctx->packet_size - ctx->off_content)) {{
+ ctx->events_discarded++;
+
+ return 0;
+ }}
+
+ /* packet is full? */
+ if ({prefix}packet_is_full(ctx)) {{
+ /* yes: is back-end full? */
+ if (ctx->cbs.is_backend_full(ctx->data)) {{
+ /* yes: discard event */
+ ctx->events_discarded++;
+
+ return 0;
+ }}
+
+ /* back-end is not full: open new packet */
+ ctx->cbs.open_packet(ctx->data);
+ }}
+
+ /* event fits the current packet? */
+ if (ev_size > (ctx->packet_size - ctx->at)) {{
+ /* no: close packet now */
+ ctx->cbs.close_packet(ctx->data);
+
+ /* is back-end full? */
+ if (ctx->cbs.is_backend_full(ctx->data)) {{
+ /* yes: discard event */
+ ctx->events_discarded++;
+
+ return 0;
+ }}
+
+ /* back-end is not full: open new packet */
+ ctx->cbs.open_packet(ctx->data);
+ assert(ev_size <= (ctx->packet_size - ctx->at));
+ }}
+
+ return 1;
+}}
+
+static
+void _commit_event(struct {prefix}ctx *ctx)
+{{
+ /* is packet full? */
+ if ({prefix}packet_is_full(ctx)) {{
+ /* yes: close it now */
+ ctx->cbs.close_packet(ctx->data);
+ }}
+}}'''
+
+
+_BITFIELD = '''#ifndef _$PREFIX$BITFIELD_H
+#define _$PREFIX$BITFIELD_H
/*
* BabelTrace
#include <stdint.h> /* C99 5.2.4.2 Numerical limits */
#include <limits.h>
-#define $PREFIX$_BYTE_ORDER $ENDIAN_DEF$
+#define $PREFIX$BYTE_ORDER $ENDIAN_DEF$
/* We can't shift a int from 32 bit, >> 32 and << 32 on int is undefined */
-#define _$prefix$_bt_piecewise_rshift(_v, _shift) \\
+#define _$prefix$bt_piecewise_rshift(_v, _shift) \\
({ \\
- typeof(_v) ___v = (_v); \\
- typeof(_shift) ___shift = (_shift); \\
+ __typeof__(_v) ___v = (_v); \\
+ __typeof__(_shift) ___shift = (_shift); \\
unsigned long sb = (___shift) / (sizeof(___v) * CHAR_BIT - 1); \\
unsigned long final = (___shift) % (sizeof(___v) * CHAR_BIT - 1); \\
\\
___v >>= final; \\
})
-#define _$prefix$_bt_piecewise_lshift(_v, _shift) \\
+#define _$prefix$bt_piecewise_lshift(_v, _shift) \\
({ \\
- typeof(_v) ___v = (_v); \\
- typeof(_shift) ___shift = (_shift); \\
+ __typeof__(_v) ___v = (_v); \\
+ __typeof__(_shift) ___shift = (_shift); \\
unsigned long sb = (___shift) / (sizeof(___v) * CHAR_BIT - 1); \\
unsigned long final = (___shift) % (sizeof(___v) * CHAR_BIT - 1); \\
\\
___v <<= final; \\
})
-#define _$prefix$_bt_is_signed_type(type) ((type) -1 < (type) 0)
+#define _$prefix$bt_is_signed_type(type) ((type) -1 < (type) 0)
-#define _$prefix$_bt_unsigned_cast(type, v) \\
+#define _$prefix$bt_unsigned_cast(type, v) \\
({ \\
(sizeof(v) < sizeof(type)) ? \\
((type) (v)) & (~(~(type) 0 << (sizeof(v) * CHAR_BIT))) : \\
})
/*
- * $prefix$_bt_bitfield_write - write integer to a bitfield in native endianness
+ * $prefix$bt_bitfield_write - write integer to a bitfield in native endianness
*
* Save integer to the bitfield, which starts at the "start" bit, has "len"
* bits.
* Also, consecutive bitfields are placed from higher to lower bits.
*/
-#define _$prefix$_bt_bitfield_write_le(_ptr, type, _start, _length, _v) \\
+#define _$prefix$bt_bitfield_write_le(_ptr, type, _start, _length, _v) \\
do { \\
- typeof(_v) __v = (_v); \\
+ __typeof__(_v) __v = (_v); \\
type *__ptr = (void *) (_ptr); \\
unsigned long __start = (_start), __length = (_length); \\
type mask, cmask; \\
\\
/* Trim v high bits */ \\
if (__length < sizeof(__v) * CHAR_BIT) \\
- __v &= ~((~(typeof(__v)) 0) << __length); \\
+ __v &= ~((~(__typeof__(__v)) 0) << __length); \\
\\
/* We can now append v with a simple "or", shift it piece-wise */ \\
this_unit = start_unit; \\
cmask &= ~mask; \\
__ptr[this_unit] &= mask; \\
__ptr[this_unit] |= cmask; \\
- __v = _$prefix$_bt_piecewise_rshift(__v, ts - cshift); \\
+ __v = _$prefix$bt_piecewise_rshift(__v, ts - cshift); \\
__start += ts - cshift; \\
this_unit++; \\
} \\
for (; this_unit < end_unit - 1; this_unit++) { \\
__ptr[this_unit] = (type) __v; \\
- __v = _$prefix$_bt_piecewise_rshift(__v, ts); \\
+ __v = _$prefix$bt_piecewise_rshift(__v, ts); \\
__start += ts; \\
} \\
if (end % ts) { \\
__ptr[this_unit] = (type) __v; \\
} while (0)
-#define _$prefix$_bt_bitfield_write_be(_ptr, type, _start, _length, _v) \\
+#define _$prefix$bt_bitfield_write_be(_ptr, type, _start, _length, _v) \\
do { \\
- typeof(_v) __v = (_v); \\
+ __typeof__(_v) __v = (_v); \\
type *__ptr = (void *) (_ptr); \\
unsigned long __start = (_start), __length = (_length); \\
type mask, cmask; \\
\\
/* Trim v high bits */ \\
if (__length < sizeof(__v) * CHAR_BIT) \\
- __v &= ~((~(typeof(__v)) 0) << __length); \\
+ __v &= ~((~(__typeof__(__v)) 0) << __length); \\
\\
/* We can now append v with a simple "or", shift it piece-wise */ \\
this_unit = end_unit - 1; \\
cmask &= ~mask; \\
__ptr[this_unit] &= mask; \\
__ptr[this_unit] |= cmask; \\
- __v = _$prefix$_bt_piecewise_rshift(__v, cshift); \\
+ __v = _$prefix$bt_piecewise_rshift(__v, cshift); \\
end -= cshift; \\
this_unit--; \\
} \\
for (; (long) this_unit >= (long) start_unit + 1; this_unit--) { \\
__ptr[this_unit] = (type) __v; \\
- __v = _$prefix$_bt_piecewise_rshift(__v, ts); \\
+ __v = _$prefix$bt_piecewise_rshift(__v, ts); \\
end -= ts; \\
} \\
if (__start % ts) { \\
} while (0)
/*
- * $prefix$_bt_bitfield_write - write integer to a bitfield in native endianness
- * $prefix$_bt_bitfield_write_le - write integer to a bitfield in little endian
- * $prefix$_bt_bitfield_write_be - write integer to a bitfield in big endian
+ * $prefix$bt_bitfield_write_le - write integer to a bitfield in little endian
+ * $prefix$bt_bitfield_write_be - write integer to a bitfield in big endian
*/
-#if ($PREFIX$_BYTE_ORDER == LITTLE_ENDIAN)
+#if ($PREFIX$BYTE_ORDER == LITTLE_ENDIAN)
-#define $prefix$_bt_bitfield_write(ptr, type, _start, _length, _v) \\
- _$prefix$_bt_bitfield_write_le(ptr, type, _start, _length, _v)
+#define $prefix$bt_bitfield_write_le(ptr, type, _start, _length, _v) \\
+ _$prefix$bt_bitfield_write_le(ptr, type, _start, _length, _v)
-#define $prefix$_bt_bitfield_write_le(ptr, type, _start, _length, _v) \\
- _$prefix$_bt_bitfield_write_le(ptr, type, _start, _length, _v)
+#define $prefix$bt_bitfield_write_be(ptr, type, _start, _length, _v) \\
+ _$prefix$bt_bitfield_write_be(ptr, unsigned char, _start, _length, _v)
-#define $prefix$_bt_bitfield_write_be(ptr, type, _start, _length, _v) \\
- _$prefix$_bt_bitfield_write_be(ptr, unsigned char, _start, _length, _v)
+#elif ($PREFIX$BYTE_ORDER == BIG_ENDIAN)
-#elif ($PREFIX$_BYTE_ORDER == BIG_ENDIAN)
+#define $prefix$bt_bitfield_write_le(ptr, type, _start, _length, _v) \\
+ _$prefix$bt_bitfield_write_le(ptr, unsigned char, _start, _length, _v)
-#define $prefix$_bt_bitfield_write(ptr, type, _start, _length, _v) \\
- _$prefix$_bt_bitfield_write_be(ptr, type, _start, _length, _v)
+#define $prefix$bt_bitfield_write_be(ptr, type, _start, _length, _v) \\
+ _$prefix$bt_bitfield_write_be(ptr, type, _start, _length, _v)
-#define $prefix$_bt_bitfield_write_le(ptr, type, _start, _length, _v) \\
- _$prefix$_bt_bitfield_write_le(ptr, unsigned char, _start, _length, _v)
-
-#define $prefix$_bt_bitfield_write_be(ptr, type, _start, _length, _v) \\
- _$prefix$_bt_bitfield_write_be(ptr, type, _start, _length, _v)
-
-#else /* ($PREFIX$_BYTE_ORDER == PDP_ENDIAN) */
+#else /* ($PREFIX$BYTE_ORDER == PDP_ENDIAN) */
#error "Byte order not supported"
#endif
-static
-void $prefix$_write_integer_signed_le(void *ptr, uint32_t at, uint32_t len, int64_t v)
-{
- $prefix$_bt_bitfield_write_le(ptr, uint8_t, at, len, v);
-}
-
-static
-void $prefix$_write_integer_unsigned_le(void *ptr, uint32_t at, uint32_t len, uint64_t v)
-{
- $prefix$_bt_bitfield_write_le(ptr, uint8_t, at, len, v);
-}
-
-static
-void $prefix$_write_integer_signed_be(void *ptr, uint32_t at, uint32_t len, int64_t v)
-{
- $prefix$_bt_bitfield_write_be(ptr, uint8_t, at, len, v);
-}
-
-static
-void $prefix$_write_integer_unsigned_be(void *ptr, uint32_t at, uint32_t len, uint64_t v)
-{
- $prefix$_bt_bitfield_write_be(ptr, uint8_t, at, len, v);
-}
-
-#endif /* _$PREFIX$_BITFIELD_H */
-"""
+#endif /* _$PREFIX$BITFIELD_H */
+'''
--- /dev/null
+# The MIT License (MIT)
+#
+# Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+
+from barectf import metadata
+from barectf import codegen
+import datetime
+import barectf
+
+
+_bo_to_string_map = {
+ metadata.ByteOrder.LE: 'le',
+ metadata.ByteOrder.BE: 'be',
+}
+
+
+_encoding_to_string_map = {
+ metadata.Encoding.NONE: 'none',
+ metadata.Encoding.ASCII: 'ASCII',
+ metadata.Encoding.UTF8: 'UTF8',
+}
+
+
+def _bo_to_string(bo):
+ return _bo_to_string_map[bo]
+
+
+def _encoding_to_string(encoding):
+ return _encoding_to_string_map[encoding]
+
+
+def _bool_to_string(b):
+ return 'true' if b else 'false'
+
+
+def _gen_integer(t, cg):
+ cg.add_line('integer {')
+ cg.indent()
+ cg.add_line('size = {};'.format(t.size))
+ cg.add_line('align = {};'.format(t.align))
+ cg.add_line('signed = {};'.format(_bool_to_string(t.signed)))
+ cg.add_line('byte_order = {};'.format(_bo_to_string(t.byte_order)))
+ cg.add_line('base = {};'.format(t.base))
+ cg.add_line('encoding = {};'.format(_encoding_to_string(t.encoding)))
+
+ if t.property_mappings:
+ clock_name = t.property_mappings[0].object.name
+ cg.add_line('map = clock.{}.value;'.format(clock_name))
+
+ cg.unindent()
+ cg.add_line('}')
+
+
+def _gen_float(t, cg):
+ cg.add_line('floating_point {')
+ cg.indent()
+ cg.add_line('exp_dig = {};'.format(t.exp_size))
+ cg.add_line('mant_dig = {};'.format(t.mant_size))
+ cg.add_line('align = {};'.format(t.align))
+ cg.add_line('byte_order = {};'.format(_bo_to_string(t.byte_order)))
+ cg.unindent()
+ cg.add_line('}')
+
+
+def _gen_enum(t, cg):
+ cg.add_line('enum : ')
+ cg.add_glue()
+ _gen_type(t.value_type, cg)
+ cg.append_to_last_line(' {')
+ cg.indent()
+
+ for label, (mn, mx) in t.members.items():
+ if mn == mx:
+ rg = str(mn)
+ else:
+ rg = '{} ... {}'.format(mn, mx)
+
+ line = '"{}" = {},'.format(label, rg)
+ cg.add_line(line)
+
+ cg.unindent()
+ cg.add_line('}')
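`_gen_enum` above collapses a member whose range covers a single value to that value, and renders wider ranges with TSDL's `...` operator. The range-rendering rule in isolation (standalone sketch):

```python
def member_range_string(mn, mx):
    # Mirrors the mn == mx test in _gen_enum: single values print as-is,
    # ranges use TSDL's "min ... max" notation.
    if mn == mx:
        return str(mn)
    return '{} ... {}'.format(mn, mx)
```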
+
+
+def _gen_string(t, cg):
+ cg.add_line('string {')
+ cg.indent()
+ cg.add_line('encoding = {};'.format(_encoding_to_string(t.encoding)))
+ cg.unindent()
+ cg.add_line('}')
+
+
+def _find_deepest_array_element_type(t):
+ if type(t) is metadata.Array:
+ return _find_deepest_array_element_type(t.element_type)
+
+ return t
+
+
+def _fill_array_lengths(t, lengths):
+ if type(t) is metadata.Array:
+ lengths.append(t.length)
+ _fill_array_lengths(t.element_type, lengths)
+
+
+def _gen_struct_variant_entry(name, t, cg):
+ real_t = _find_deepest_array_element_type(t)
+ _gen_type(real_t, cg)
+ cg.append_to_last_line(' {}'.format(name))
+
+ # array
+ lengths = []
+ _fill_array_lengths(t, lengths)
+
+ if lengths:
+ for length in reversed(lengths):
+ cg.append_to_last_line('[{}]'.format(length))
+
+ cg.append_to_last_line(';')
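`_gen_struct_variant_entry` unwraps nested array types: the deepest element type is printed first, then one `[length]` subscript per wrapper, with the collected lengths emitted in reverse. A standalone sketch of the two helpers, using a stand-in `Array` tuple (hypothetical, for illustration only):

```python
from collections import namedtuple

# Stand-in for barectf's metadata.Array (illustration only).
Array = namedtuple('Array', ['length', 'element_type'])

def find_deepest_element_type(t):
    # Mirrors _find_deepest_array_element_type above.
    if type(t) is Array:
        return find_deepest_element_type(t.element_type)
    return t

def fill_array_lengths(t, lengths):
    # Mirrors _fill_array_lengths: collect one length per wrapper,
    # outermost first.
    if type(t) is Array:
        lengths.append(t.length)
        fill_array_lengths(t.element_type, lengths)

nested = Array(3, Array(5, 'uint8'))
lengths = []
fill_array_lengths(nested, lengths)
# lengths is [3, 5]; the entry generator then appends them reversed.
```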
+
+
+def _gen_struct(t, cg):
+ cg.add_line('struct {')
+ cg.indent()
+
+ for field_name, field_type in t.fields.items():
+ _gen_struct_variant_entry(field_name, field_type, cg)
+
+ cg.unindent()
+
+ if not t.fields:
+ cg.add_glue()
+
+ cg.add_line('}} align({})'.format(t.min_align))
+
+
+def _gen_variant(t, cg):
+ cg.add_line('variant <{}> {{'.format(t.tag))
+ cg.indent()
+
+ for type_name, type_type in t.types.items():
+ _gen_struct_variant_entry(type_name, type_type, cg)
+
+ cg.unindent()
+
+ if not t.types:
+ cg.add_glue()
+
+ cg.add_line('}')
+
+
+_type_to_gen_type_func = {
+ metadata.Integer: _gen_integer,
+ metadata.FloatingPoint: _gen_float,
+ metadata.Enum: _gen_enum,
+ metadata.String: _gen_string,
+ metadata.Struct: _gen_struct,
+ metadata.Variant: _gen_variant,
+}
+
+
+def _gen_type(t, cg):
+ _type_to_gen_type_func[type(t)](t, cg)
+
+
+def _gen_entity(name, t, cg):
+ cg.add_line('{} := '.format(name))
+ cg.add_glue()
+ _gen_type(t, cg)
+ cg.append_to_last_line(';')
+
+
+def _gen_start_block(name, cg):
+ cg.add_line('{} {{'.format(name))
+ cg.indent()
+
+
+def _gen_end_block(cg):
+ cg.unindent()
+ cg.add_line('};')
+ cg.add_empty_line()
+
+
+def _gen_trace_block(meta, cg):
+ trace = meta.trace
+
+ _gen_start_block('trace', cg)
+ cg.add_line('major = 1;')
+ cg.add_line('minor = 8;')
+ line = 'byte_order = {};'.format(_bo_to_string(trace.byte_order))
+ cg.add_line(line)
+
+ if trace.uuid is not None:
+ line = 'uuid = "{}";'.format(trace.uuid)
+ cg.add_line(line)
+
+ if trace.packet_header_type is not None:
+ _gen_entity('packet.header', trace.packet_header_type, cg)
+
+ _gen_end_block(cg)
+
+
+def _escape_literal_string(s):
+ esc = s.replace('\\', '\\\\')
+ esc = esc.replace('\n', '\\n')
+ esc = esc.replace('\r', '\\r')
+ esc = esc.replace('\t', '\\t')
+ esc = esc.replace('"', '\\"')
+
+ return esc
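Note that `_escape_literal_string` must escape the backslash first: otherwise the backslash introduced when escaping a newline, tab, or quote would itself be escaped by a later pass. A standalone copy of the logic makes the order-dependence easy to check:

```python
def escape_literal_string(s):
    # Same logic as _escape_literal_string above: the backslash is
    # escaped first so that the backslashes produced by the following
    # replacements are not escaped again.
    esc = s.replace('\\', '\\\\')
    esc = esc.replace('\n', '\\n')
    esc = esc.replace('\r', '\\r')
    esc = esc.replace('\t', '\\t')
    esc = esc.replace('"', '\\"')
    return esc
```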
+
+
+def _gen_env_block(meta, cg):
+ env = meta.env
+
+ if not env:
+ return
+
+ _gen_start_block('env', cg)
+
+ for name, value in env.items():
+ if type(value) is int:
+ value_string = str(value)
+ else:
+ value_string = '"{}"'.format(_escape_literal_string(value))
+
+ cg.add_line('{} = {};'.format(name, value_string))
+
+ _gen_end_block(cg)
+
+
+def _gen_clock_block(clock, cg):
+ _gen_start_block('clock', cg)
+ cg.add_line('name = {};'.format(clock.name))
+
+ if clock.description is not None:
+ desc = _escape_literal_string(clock.description)
+ cg.add_line('description = "{}";'.format(desc))
+
+ if clock.uuid is not None:
+ cg.add_line('uuid = "{}";'.format(clock.uuid))
+
+ cg.add_line('freq = {};'.format(clock.freq))
+ cg.add_line('offset_s = {};'.format(clock.offset_seconds))
+ cg.add_line('offset = {};'.format(clock.offset_cycles))
+ cg.add_line('precision = {};'.format(clock.error_cycles))
+ cg.add_line('absolute = {};'.format(_bool_to_string(clock.absolute)))
+ _gen_end_block(cg)
+
+
+def _gen_clock_blocks(meta, cg):
+ clocks = meta.clocks
+
+ for clock in clocks.values():
+ _gen_clock_block(clock, cg)
+
+
+def _gen_stream_block(stream, cg):
+ cg.add_cc_line(stream.name.replace('/', ''))
+ _gen_start_block('stream', cg)
+ cg.add_line('id = {};'.format(stream.id))
+
+ if stream.packet_context_type is not None:
+ _gen_entity('packet.context', stream.packet_context_type, cg)
+
+ if stream.event_header_type is not None:
+ _gen_entity('event.header', stream.event_header_type, cg)
+
+ if stream.event_context_type is not None:
+ _gen_entity('event.context', stream.event_context_type, cg)
+
+ _gen_end_block(cg)
+
+
+def _gen_event_block(stream, ev, cg):
+ _gen_start_block('event', cg)
+ cg.add_line('name = "{}";'.format(ev.name))
+ cg.add_line('id = {};'.format(ev.id))
+ cg.add_line('stream_id = {};'.format(stream.id))
+ cg.append_cc_to_last_line(stream.name.replace('/', ''))
+
+ if ev.log_level is not None:
+ cg.add_line('loglevel = {};'.format(ev.log_level))
+
+ if ev.context_type is not None:
+ _gen_entity('context', ev.context_type, cg)
+
+ if ev.payload_type is not None:
+ _gen_entity('fields', ev.payload_type, cg)
+
+ _gen_end_block(cg)
+
+
+def _gen_streams_events_blocks(meta, cg):
+ for stream in meta.streams.values():
+ _gen_stream_block(stream, cg)
+
+ for ev in stream.events.values():
+ _gen_event_block(stream, ev, cg)
+
+
+def from_metadata(meta):
+ cg = codegen.CodeGenerator('\t')
+
+ # version/magic
+ cg.add_line('/* CTF 1.8 */')
+ cg.add_empty_line()
+ cg.add_line('/*')
+ v = barectf.__version__
+ line = ' * The following TSDL code was generated by barectf v{}'.format(v)
+ cg.add_line(line)
+ now = datetime.datetime.now()
+ line = ' * on {}.'.format(now)
+ cg.add_line(line)
+ cg.add_line(' *')
+ cg.add_line(' * For more details, see <https://github.com/efficios/barectf>.')
+ cg.add_line(' */')
+ cg.add_empty_line()
+
+ # trace block
+ _gen_trace_block(meta, cg)
+
+ # environment
+ _gen_env_block(meta, cg)
+
+ # clocks
+ _gen_clock_blocks(meta, cg)
+
+ # streams and contained events
+ _gen_streams_events_blocks(meta, cg)
+
+ return cg.code
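The `codegen.CodeGenerator` used throughout `from_metadata` is essentially an indent-aware line buffer. The following minimal stand-in (an assumption for illustration; not barectf's actual `codegen` module) shows the subset of the interface the code above relies on:

```python
class CodeGenerator:
    """Minimal indent-aware line buffer (illustrative stand-in)."""

    def __init__(self, indent_string):
        self._indent_string = indent_string
        self._level = 0
        self._lines = []

    def indent(self):
        self._level += 1

    def unindent(self):
        self._level -= 1

    def add_line(self, line):
        # Each line is prefixed with the current indentation.
        self._lines.append(self._indent_string * self._level + line)

    def add_empty_line(self):
        self._lines.append('')

    def append_to_last_line(self, s):
        self._lines[-1] += s

    @property
    def code(self):
        return '\n'.join(self._lines)

# Tiny usage example mirroring _gen_trace_block's structure:
cg = CodeGenerator('\t')
cg.add_line('trace {')
cg.indent()
cg.add_line('major = 1;')
cg.unindent()
cg.add_line('};')
```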
--- /dev/null
+BARECTF ?= barectf
+RM = rm -rf
+MKDIR = mkdir
+
+PLATFORM_DIR = ../../../platforms/linux-fs
+CFLAGS = -O2 -std=gnu99 -I$(PLATFORM_DIR) -I.
+
+TARGET = linux-fs-simple
+OBJS = $(TARGET).o barectf.o barectf-platform-linux-fs.o
+
+.PHONY: all view clean
+
+all: $(TARGET)
+
+ctf:
+ $(MKDIR) ctf
+
+$(TARGET): $(OBJS)
+ $(CC) -o $@ $^
+
+ctf/metadata barectf-bitfield.h barectf.h barectf.c: config.yaml ctf
+	$(BARECTF) $< -m ctf
+
+barectf.o: barectf.c
+ $(CC) $(CFLAGS) -c $<
+
+barectf-platform-linux-fs.o: $(PLATFORM_DIR)/barectf-platform-linux-fs.c
+ $(CC) $(CFLAGS) -c $<
+
+$(TARGET).o: $(TARGET).c barectf.h barectf-bitfield.h
+ $(CC) $(CFLAGS) -c $<
+
+clean:
+ $(RM) $(TARGET) $(OBJS) ctf
+ $(RM) barectf.h barectf-bitfield.h barectf.c
--- /dev/null
+# linux-fs-simple example
+
+This very simple example shows how to use the barectf
+[linux-fs platform](../../../platforms/linux-fs).
+
+
+## Building
+
+Make sure you have the latest version of barectf installed.
+
+Build this example:
+
+ make
+
+
+## Running
+
+Run this example:
+
+ ./linux-fs-simple
+
+The complete CTF trace is written to the `ctf` directory.
+
+You may run the example with any arguments; they will be recorded
+as string fields in the events of the binary stream, e.g.:
+
+ ./linux-fs-simple this argument and this one will be recorded
--- /dev/null
+version: '2.0'
+metadata:
+ type-aliases:
+ uint8:
+ class: integer
+ size: 8
+ uint16:
+ class: integer
+ size: 16
+ uint32:
+ class: integer
+ size: 32
+ uint64:
+ class: integer
+ size: 64
+ int8:
+ inherit: uint8
+ signed: true
+ int16:
+ inherit: int8
+ size: 16
+ int32:
+ inherit: int8
+ size: 32
+ int64:
+ inherit: int8
+ size: 64
+ float:
+ class: floating-point
+ size:
+ exp: 8
+ mant: 24
+ align: 32
+ double:
+ class: floating-point
+ size:
+ exp: 11
+ mant: 53
+ align: 64
+ byte: uint8
+ uuid:
+ class: array
+ length: 16
+ element-type: byte
+ clock-int:
+ inherit: uint64
+ property-mappings:
+ - type: clock
+ name: default
+ property: value
+ state:
+ class: enum
+ value-type: uint8
+ members:
+ - NEW
+ - TERMINATED
+ - READY
+ - RUNNING
+ - WAITING
+ log-levels:
+ EMERG: 0
+ ALERT: 1
+ CRIT: 2
+ ERR: 3
+ WARNING: 4
+ NOTICE: 5
+ INFO: 6
+ DEBUG_SYSTEM: 7
+ DEBUG_PROGRAM: 8
+ DEBUG_PROCESS: 9
+ DEBUG_MODULE: 10
+ DEBUG_UNIT: 11
+ DEBUG_FUNCTION: 12
+ DEBUG_LINE: 13
+ DEBUG: 14
+ clocks:
+ default:
+ freq: 1000000000
+ offset:
+ seconds: 1434072888
+ return-ctype: uint64_t
+ trace:
+ byte-order: le
+ uuid: auto
+ packet-header-type:
+ class: struct
+ min-align: 8
+ fields:
+ magic: uint32
+ uuid: uuid
+ stream_id: uint8
+ streams:
+ default:
+ packet-context-type:
+ class: struct
+ fields:
+ timestamp_begin: clock-int
+ timestamp_end: clock-int
+ packet_size: uint32
+ content_size: uint32
+ events_discarded: uint32
+ event-header-type:
+ class: struct
+ fields:
+ timestamp: clock-int
+ id: uint16
+ events:
+ simple_uint32:
+ payload-type:
+ class: struct
+ fields:
+ value: uint32
+ simple_int16:
+ payload-type:
+ class: struct
+ fields:
+ value: int16
+ simple_float:
+ payload-type:
+ class: struct
+ fields:
+ value: float
+ simple_string:
+ payload-type:
+ class: struct
+ fields:
+ value:
+ class: string
+ simple_enum:
+ payload-type:
+ class: struct
+ fields:
+ value: state
+ a_few_fields:
+ payload-type:
+ class: struct
+ fields:
+ int32: int32
+ uint16: uint16
+ dbl: double
+ str:
+ class: string
+ state: state
+ bit_packed_integers:
+ payload-type:
+ class: struct
+ min-align: 8
+ fields:
+ uint1:
+ inherit: uint8
+ size: 1
+ align: 1
+ int1:
+ inherit: int8
+ size: 1
+ align: 1
+ uint2:
+ inherit: uint8
+ size: 2
+ align: 1
+ int3:
+ inherit: int8
+ size: 3
+ align: 1
+ uint4:
+ inherit: uint8
+ size: 4
+ align: 1
+ int5:
+ inherit: int8
+ size: 5
+ align: 1
+ uint6:
+ inherit: uint8
+ size: 6
+ align: 1
+ int7:
+ inherit: int8
+ size: 7
+ align: 1
+ uint8:
+ inherit: uint8
+ align: 1
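In the `bit_packed_integers` payload above, `min-align: 8` starts the payload on a byte boundary, and `align: 1` on every field lets the nine integers (1 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 bits) pack into 37 contiguous bits. A quick sketch of that size arithmetic (illustrative helper, not barectf code):

```python
def packed_size(sizes, align=1):
    # Accumulate a bit offset, rounding up to the alignment before
    # placing each field; with align = 1 there is never any padding.
    offset = 0
    for size in sizes:
        offset = -(-offset // align) * align  # ceil to multiple of align
        offset += size
    return offset

# bit_packed_integers field sizes, in declaration order:
sizes = [1, 1, 2, 3, 4, 5, 6, 7, 8]
```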
--- /dev/null
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <time.h>
+#include <barectf-platform-linux-fs.h>
+#include <barectf.h>
+
+enum state_t {
+ NEW,
+ TERMINATED,
+ READY,
+ RUNNING,
+ WAITING,
+};
+
+static void trace_stuff(struct barectf_default_ctx *ctx, int argc,
+ char *argv[])
+{
+ int i;
+ const char *str;
+
+ /* record 40000 events */
+ for (i = 0; i < 5000; ++i) {
+ barectf_default_trace_simple_uint32(ctx, i * 1500);
+ barectf_default_trace_simple_int16(ctx, -i * 2);
+ barectf_default_trace_simple_float(ctx, (float) i / 1.23);
+
+ if (argc > 0) {
+ str = argv[i % argc];
+ } else {
+ str = "hello there!";
+ }
+
+ barectf_default_trace_simple_string(ctx, str);
+ barectf_default_trace_simple_enum(ctx, RUNNING);
+ barectf_default_trace_a_few_fields(ctx, -1, 301, -3.14159,
+ str, NEW);
+ barectf_default_trace_bit_packed_integers(ctx, 1, -1, 3,
+ -2, 2, 7, 23,
+ -55, 232);
+ barectf_default_trace_simple_enum(ctx, TERMINATED);
+ }
+}
+
+int main(int argc, char *argv[])
+{
+ struct barectf_platform_linux_fs_ctx *platform_ctx;
+
+ /* initialize platform */
+ platform_ctx = barectf_platform_linux_fs_init(512, "ctf", 1, 2, 7);
+
+ if (!platform_ctx) {
+ fprintf(stderr, "Error: could not initialize platform\n");
+ return 1;
+ }
+
+ /* trace stuff (will create/write packets as it runs) */
+ trace_stuff(barectf_platform_linux_fs_get_barectf_ctx(platform_ctx),
+ argc, argv);
+
+ /* finalize platform */
+ barectf_platform_linux_fs_fini(platform_ctx);
+
+ return 0;
+}
--- /dev/null
+CROSS_COMPILE ?= e-
+
+BARECTF ?= barectf
+RM = rm -rf
+MKDIR = mkdir
+CC = $(CROSS_COMPILE)gcc
+LD = $(CC)
+OBJCOPY = $(CROSS_COMPILE)objcopy
+
+ESDK = $(EPIPHANY_HOME)
+ELDF = $(ESDK)/bsps/current/fast.ldf
+PLATFORM_DIR = ../../../platforms/parallella
+CFLAGS = -O2 -std=c99 -I$(PLATFORM_DIR) -I.
+LDFLAGS = -T $(ELDF) -le-lib
+
+TARGET = parallella
+OBJS = $(TARGET).o barectf.o barectf-platform-parallella.o
+
+.PHONY: all view clean
+
+all: $(TARGET).srec
+
+ctf:
+ $(MKDIR) ctf
+
+$(TARGET): $(OBJS)
+ $(LD) -o $@ $^ $(LDFLAGS)
+
+$(TARGET).srec: $(TARGET)
+ $(OBJCOPY) --srec-forceS3 --output-target srec $< $@
+
+ctf/metadata barectf-bitfield.h barectf.h barectf.c: config.yaml ctf
+	$(BARECTF) $< -m ctf
+
+barectf.o: barectf.c barectf.h barectf-bitfield.h
+ $(CC) $(CFLAGS) -c $<
+
+barectf-platform-parallella.o: $(PLATFORM_DIR)/barectf-platform-parallella.c
+ $(CC) $(CFLAGS) -c $<
+
+$(TARGET).o: $(TARGET).c barectf.h barectf-bitfield.h
+ $(CC) $(CFLAGS) -c $<
+
+clean:
+ $(RM) $(TARGET) $(TARGET).srec $(OBJS) ctf
+ $(RM) barectf.h barectf-bitfield.h barectf.c
--- /dev/null
+# Parallella example
+
+This example shows how to use the barectf
+[Parallella platform](../../../platforms/parallella).
+
+
+## Building
+
+Make sure you have the latest version of barectf installed.
+
+Build this example:
+
+ make
+
+
+## Running
+
+Make sure the consumer application is running first
+(see the Parallella platform's
+[`README.md`](../../../platforms/parallella/README.md) file):
+
+ e-reset
+ ./consumer /path/to/the/ctf/directory/here
+
+Load and start this example on all 16 cores:
+
+ e-loader -s parallella.srec 0 0 4 4
+
+When you've had enough, kill the consumer with `SIGINT` (Ctrl+C) and
+reset the platform with `e-reset` to stop the Epiphany cores.
+
+The complete CTF trace is written to the `ctf` directory.
--- /dev/null
+version: '2.0'
+metadata:
+ type-aliases:
+ uint8:
+ class: integer
+ size: 8
+ uint6:
+ class: integer
+ size: 6
+ uint16:
+ class: integer
+ size: 16
+ uint32:
+ class: integer
+ size: 32
+ uint64:
+ class: integer
+ size: 64
+ int8:
+ inherit: uint8
+ signed: true
+ int16:
+ inherit: int8
+ size: 16
+ int32:
+ inherit: int8
+ size: 32
+ int64:
+ inherit: int8
+ size: 64
+ float:
+ class: floating-point
+ size:
+ exp: 8
+ mant: 24
+ align: 32
+ double:
+ class: floating-point
+ size:
+ exp: 11
+ mant: 53
+ align: 64
+ byte: uint8
+ uuid:
+ class: array
+ length: 16
+ element-type: byte
+ clock_int:
+ inherit: uint64
+ property-mappings:
+ - type: clock
+ name: default
+ property: value
+ state:
+ class: enum
+ value-type: uint8
+ members:
+ - NEW
+ - TERMINATED
+ - READY
+ - RUNNING
+ - WAITING
+ str:
+ class: string
+ log-levels:
+ EMERG: 0
+ ALERT: 1
+ CRIT: 2
+ ERR: 3
+ WARNING: 4
+ NOTICE: 5
+ INFO: 6
+ DEBUG_SYSTEM: 7
+ DEBUG_PROGRAM: 8
+ DEBUG_PROCESS: 9
+ DEBUG_MODULE: 10
+ DEBUG_UNIT: 11
+ DEBUG_FUNCTION: 12
+ DEBUG_LINE: 13
+ DEBUG: 14
+ clocks:
+ default:
+ freq: 1000000000
+ offset:
+ seconds: 1434580186
+ return-ctype: uint64_t
+ trace:
+ byte-order: le
+ uuid: auto
+ packet-header-type:
+ class: struct
+ min-align: 8
+ fields:
+ magic: uint32
+ uuid: uuid
+ stream_id: uint8
+ streams:
+ default:
+ packet-context-type:
+ class: struct
+ fields:
+ timestamp_begin: clock_int
+ timestamp_end: clock_int
+ packet_size: uint32
+ content_size: uint32
+ events_discarded: uint32
+ row: uint6
+ col: uint6
+ event-header-type:
+ class: struct
+ fields:
+ timestamp: clock_int
+ id: uint16
+ events:
+ bit_packed_integers:
+ payload-type:
+ class: struct
+ min-align: 8
+ fields:
+ uint1:
+ inherit: uint8
+ size: 1
+ align: 1
+ int1:
+ inherit: int8
+ size: 1
+ align: 1
+ uint2:
+ inherit: uint8
+ size: 2
+ align: 1
+ int3:
+ inherit: int8
+ size: 3
+ align: 1
+ uint4:
+ inherit: uint8
+ size: 4
+ align: 1
+ int5:
+ inherit: int8
+ size: 5
+ align: 1
+ string_and_float:
+ payload-type:
+ class: struct
+ fields:
+ the_string: str
+ the_float: float
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <e_lib.h>
+
+#include "barectf.h"
+#include "barectf-platform-parallella.h"
+
+#define WAND_BIT (1 << 3)
+
+static void __attribute__((interrupt)) wand_trace_isr(int signum)
+{
+ (void) signum;
+}
+
+static void sync(void)
+{
+ uint32_t irq_state;
+
+ /* enable WAND interrupt */
+ e_irq_global_mask(E_FALSE);
+ e_irq_attach(WAND_BIT, wand_trace_isr);
+ e_irq_mask(WAND_BIT, E_FALSE);
+
+ /* WAND + IDLE */
+ __asm__ __volatile__("wand");
+ __asm__ __volatile__("idle");
+
+ /* acknowledge interrupt */
+ irq_state = e_reg_read(E_REG_STATUS);
+ irq_state &= ~WAND_BIT;
+ e_reg_write(E_REG_STATUS, irq_state);
+}
+
+int main(void)
+{
+ struct barectf_default_ctx *barectf_ctx;
+ static const char *strings[] = {
+ "calories",
+ "fat",
+ "carbohydrate",
+ "protein",
+ };
+ uint8_t at = 0;
+
+ /* initialize tracing platform */
+ if (tracing_init()) {
+ /* init. error: do not trace */
+ return 1;
+ }
+
+ barectf_ctx = tracing_get_barectf_ctx();
+
+ /* synchronize all cores */
+ sync();
+
+ /* reset tracing clock value */
+ tracing_reset_clock();
+
+ /* trace */
+ for (;;) {
+ int8_t b = (int8_t) at;
+ size_t wait_count;
+
+ barectf_default_trace_bit_packed_integers(barectf_ctx,
+ at, -b, at * 2, -b * 2, at * 3, -b * 3);
+
+ for (wait_count = 0; wait_count < 1000; ++wait_count) {
+ __asm__ __volatile__("nop");
+ }
+
+ barectf_default_trace_string_and_float(barectf_ctx,
+ strings[at & 3], 0.1234 * (float) at);
+ at++;
+
+#ifdef LOW_THROUGHPUT
+ for (wait_count = 0; wait_count < 25000000; ++wait_count) {
+ __asm__ __volatile__("nop");
+ }
+#endif /* LOW_THROUGHPUT */
+ }
+
+ /* never executed here, but this is where this would normally go */
+ tracing_fini();
+
+ return 0;
+}
+++ /dev/null
-BARECTF ?= barectf
-RM = rm -rf
-
-CFLAGS = -O2
-
-TARGET = simple
-OBJS = $(TARGET).o barectf.o
-
-.PHONY: all view clean
-
-all: $(TARGET)
-
-$(TARGET): $(OBJS)
- $(CC) -o $@ $^
-
-barectf.h barectf.c: ctf/metadata
- barectf $<
-
-barectf.o: barectf.c
- $(CC) $(CFLAGS) -Wno-strict-aliasing -Wno-unused-variable -c $<
-
-$(TARGET).o: $(TARGET).c barectf.h
- $(CC) $(CFLAGS) -c $<
-
-clean:
- $(RM) $(TARGET) $(OBJS) ctf/stream*
- $(RM) barectf.h barectf_bitfield.h barectf.c
+++ /dev/null
-/* CTF 1.8 */
-
-typealias integer {size = 8; align = 8;} := uint8_t;
-typealias integer {size = 16; align = 16;} := uint16_t;
-typealias integer {size = 32; align = 32;} := uint32_t;
-typealias integer {size = 64; align = 64;} := uint64_t;
-typealias integer {size = 8; align = 8; signed = true;} := int8_t;
-typealias integer {size = 16; align = 16; signed = true;} := int16_t;
-typealias integer {size = 32; align = 32; signed = true;} := int32_t;
-typealias integer {size = 64; align = 64; signed = true;} := int64_t;
-
-typealias floating_point {
- exp_dig = 8;
- mant_dig = 24;
- align = 32;
-} := float;
-
-typealias floating_point {
- exp_dig = 11;
- mant_dig = 53;
- align = 64;
-} := double;
-
-trace {
- major = 1;
- minor = 8;
- byte_order = le;
-
- packet.header := struct {
- uint32_t magic;
- uint32_t stream_id;
- };
-};
-
-env {
- domain = "bare";
- tracer_name = "barectf";
- tracer_major = 0;
- tracer_minor = 1;
- tracer_patchlevel = 0;
-};
-
-clock {
- name = my_clock;
- freq = 1000000000;
- offset = 0;
-};
-
-typealias integer {
- size = 64;
- map = clock.my_clock.value;
-} := my_clock_int_t;
-
-stream {
- id = 0;
-
- packet.context := struct {
- my_clock_int_t timestamp_begin;
- my_clock_int_t timestamp_end;
- uint64_t packet_size;
- uint64_t content_size;
- uint32_t events_discarded;
- };
-
- event.header := struct {
- uint32_t id;
- my_clock_int_t timestamp;
- };
-};
-
-/* an event with a simple 32-bit unsigned integer field */
-event {
- name = "simple_uint32";
- id = 0;
- stream_id = 0;
-
- fields := struct {
- uint32_t _value;
- };
-};
-
-/* an event with a simple 16-bit signed integer field */
-event {
- name = "simple_int16";
- id = 1;
- stream_id = 0;
-
- fields := struct {
- int16_t _value;
- };
-};
-
-/*
- * An event with a simple IEEE 754 (see type alias above) single-precision
- * floating point number.
- */
-event {
- name = "simple_float";
- id = 2;
- stream_id = 0;
-
- fields := struct {
- float _value;
- };
-};
-
-/* an event with a simple NULL-terminated string field */
-event {
- name = "simple_string";
- id = 3;
- stream_id = 0;
-
- fields := struct {
- string _value;
- };
-};
-
-/* custom enumeration, of which the key is a 8-bit unsigned integer */
-typealias enum : uint8_t {
- NEW, /* 0 */
- TERMINATED, /* 1 */
- READY, /* 2 */
- RUNNING, /* 3 */
- WAITING, /* 4 */
-} := state_t;
-
-/* an event with a simple enumeration (see type alias above) field */
-event {
- name = "simple_enum";
- id = 4;
- stream_id = 0;
-
- fields := struct {
- state_t _state;
- };
-};
-
-/* an event with a few fields */
-event {
- name = "a_few_fields";
- id = 5;
- stream_id = 0;
-
- fields := struct {
- int32_t _int32;
- uint16_t _uint16;
- double _double;
- string _string;
- state_t _state;
- };
-};
-
-/* an event with bit-packed integer fields */
-event {
- name = "bit_packed_integers";
- id = 6;
- stream_id = 0;
-
- fields := struct {
- integer {size = 1;} _uint1;
- integer {size = 1; signed = true;} _int1;
- integer {size = 2;} _uint2;
- integer {size = 3; signed = true;} _int3;
- integer {size = 4;} _uint4;
- integer {size = 5; signed = true;} _int5;
- integer {size = 6;} _uint6;
- integer {size = 7; signed = true;} _int7;
- integer {size = 8; align = 1;} _uint8;
- };
-};
+++ /dev/null
-#include <stdio.h>
-#include <stdint.h>
-#include <stdlib.h>
-#include <time.h>
-
-#include "barectf.h"
-
-static uint64_t get_clock(void* data)
-{
- struct timespec ts;
-
- clock_gettime(CLOCK_MONOTONIC, &ts);
-
- return ts.tv_sec * 1000000000UL + ts.tv_nsec;
-}
-
-enum state_t {
- NEW,
- TERMINATED,
- READY,
- RUNNING,
- WAITING,
-};
-
-static void simple(uint8_t* buf, size_t sz)
-{
- /* initialize barectf context */
- struct barectf_ctx ctx;
- struct barectf_ctx* pctx = &ctx;
-
- barectf_init(pctx, buf, sz, get_clock, NULL);
-
- /* open packet */
- barectf_open_packet(pctx);
-
- /* record events */
- barectf_trace_simple_uint32(pctx, 20150101);
- barectf_trace_simple_int16(pctx, -2999);
- barectf_trace_simple_float(pctx, 23.57);
- barectf_trace_simple_string(pctx, "Hello, World!");
- barectf_trace_simple_enum(pctx, RUNNING);
- barectf_trace_a_few_fields(pctx, -1, 301, -3.14159, "Hello again!", NEW);
- barectf_trace_bit_packed_integers(pctx, 1, -1, 3, -2, 2, 7, 23, -55, 232);
-
- /* close packet with 3 discarded events */
- barectf_close_packet(pctx, 3);
-}
-
-static void write_packet(const char* filename, const uint8_t* buf, size_t sz)
-{
- FILE* fh = fopen(filename, "wb");
-
- if (!fh) {
- return;
- }
-
- fwrite(buf, 1, sz, fh);
- fclose(fh);
-}
-
-int main(void)
-{
- puts("simple barectf example!");
-
- const size_t buf_sz = 8192;
-
- uint8_t* buf = malloc(buf_sz);
-
- if (!buf) {
- return 1;
- }
-
- simple(buf, buf_sz);
- write_packet("ctf/stream_0", buf, buf_sz);
- free(buf);
-
- return 0;
-}
--- /dev/null
+# barectf linux-fs platform
+
+This is a very simple barectf platform, written for demonstration purposes,
+which writes the binary packets to a stream file on the file system.
+
+This platform can also simulate a full back-end from time to time, with a
+configurable probability (`full_backend_rand_lt` chances out of
+`full_backend_rand_max`).
+
+
+## Requirements
+
+ * barectf prefix: `barectf_`
+ * No custom trace packet header fields
+ * A single stream named `default`, with no custom stream packet context
+ fields
+ * One clock named `default` returning `uint64_t`
+
+
+## Files
+
+ * `barectf-platform-linux-fs.h`: include this in your application
+ * `barectf-platform-linux-fs.c`: link your application with this
+
+
+## Using
+
+See [`barectf-platform-linux-fs.h`](barectf-platform-linux-fs.h).
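When full back-end simulation is enabled, the platform reports a full back-end when `rand() % full_backend_rand_max < full_backend_rand_lt`, i.e. roughly `full_backend_rand_lt` times out of `full_backend_rand_max` (ignoring modulo bias). A quick Python check of that ratio (illustrative, not part of the platform):

```python
def full_backend_fraction(rand_lt, rand_max):
    # rand() % rand_max is (nearly) uniform over 0..rand_max-1, and
    # exactly rand_lt of those residues satisfy `< rand_lt`.
    hits = sum(1 for residue in range(rand_max) if residue < rand_lt)
    return hits / rand_max

# The linux-fs-simple example initializes the platform with
# full_backend_rand_lt = 2 and full_backend_rand_max = 7, so the
# back-end is "full" about 2/7 of the time.
```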
--- /dev/null
+/*
+ * barectf linux-fs platform
+ *
+ * Copyright (c) 2015 EfficiOS Inc. and Linux Foundation
+ * Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <assert.h>
+#include <barectf.h>
+#include <time.h>
+
+#include "barectf-platform-linux-fs.h"
+
+struct barectf_platform_linux_fs_ctx {
+ struct barectf_default_ctx ctx;
+ FILE *fh;
+ int simulate_full_backend;
+ unsigned int full_backend_rand_lt;
+ unsigned int full_backend_rand_max;
+};
+
+static uint64_t get_clock(void* data)
+{
+ struct timespec ts;
+
+ clock_gettime(CLOCK_MONOTONIC, &ts);
+
+ return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
+}
+
+static void write_packet(struct barectf_platform_linux_fs_ctx *ctx)
+{
+ size_t nmemb = fwrite(barectf_packet_buf(&ctx->ctx),
+ barectf_packet_buf_size(&ctx->ctx), 1, ctx->fh);
+ assert(nmemb == 1);
+}
+
+static int is_backend_full(void *data)
+{
+ struct barectf_platform_linux_fs_ctx *ctx = data;
+
+ if (ctx->simulate_full_backend) {
+ if (rand() % ctx->full_backend_rand_max <
+ ctx->full_backend_rand_lt) {
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static void open_packet(void *data)
+{
+ struct barectf_platform_linux_fs_ctx *ctx = data;
+
+ barectf_default_open_packet(&ctx->ctx);
+}
+
+static void close_packet(void *data)
+{
+ struct barectf_platform_linux_fs_ctx *ctx = data;
+
+ /* close packet now */
+ barectf_default_close_packet(&ctx->ctx);
+
+ /* write packet to file */
+ write_packet(ctx);
+}
+
+struct barectf_platform_linux_fs_ctx *barectf_platform_linux_fs_init(
+ unsigned int buf_size, const char *trace_dir, int simulate_full_backend,
+ unsigned int full_backend_rand_lt, unsigned int full_backend_rand_max)
+{
+ char stream_path[256];
+ uint8_t *buf;
+ struct barectf_platform_linux_fs_ctx *ctx;
+ struct barectf_platform_callbacks cbs = {
+ .default_clock_get_value = get_clock,
+ .is_backend_full = is_backend_full,
+ .open_packet = open_packet,
+ .close_packet = close_packet,
+ };
+
+ ctx = malloc(sizeof(*ctx));
+
+ if (!ctx) {
+ return NULL;
+ }
+
+ buf = malloc(buf_size);
+
+ if (!buf) {
+ free(ctx);
+ return NULL;
+ }
+
+	snprintf(stream_path, sizeof(stream_path), "%s/stream", trace_dir);
+ ctx->fh = fopen(stream_path, "wb");
+
+ if (!ctx->fh) {
+ free(ctx);
+ free(buf);
+ return NULL;
+ }
+
+ ctx->simulate_full_backend = simulate_full_backend;
+ ctx->full_backend_rand_lt = full_backend_rand_lt;
+ ctx->full_backend_rand_max = full_backend_rand_max;
+
+ barectf_init(&ctx->ctx, buf, buf_size, cbs, ctx);
+ open_packet(ctx);
+
+ return ctx;
+}
+
+void barectf_platform_linux_fs_fini(struct barectf_platform_linux_fs_ctx *ctx)
+{
+ if (barectf_packet_is_open(&ctx->ctx) &&
+ !barectf_packet_is_empty(&ctx->ctx)) {
+ close_packet(ctx);
+ }
+
+ fclose(ctx->fh);
+ free(barectf_packet_buf(&ctx->ctx));
+ free(ctx);
+}
+
+struct barectf_default_ctx *barectf_platform_linux_fs_get_barectf_ctx(
+ struct barectf_platform_linux_fs_ctx *ctx)
+{
+ return &ctx->ctx;
+}
--- /dev/null
+#ifndef _BARECTF_PLATFORM_LINUX_FS_H
+#define _BARECTF_PLATFORM_LINUX_FS_H
+
+/*
+ * barectf linux-fs platform
+ *
+ * Copyright (c) 2015 EfficiOS Inc. and Linux Foundation
+ * Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <stdint.h>
+#include <barectf.h>
+
+struct barectf_platform_linux_fs_ctx;
+
+/**
+ * Initializes the platform.
+ *
+ * @param buf_size Packet size (bytes)
+ * @param trace_dir Trace directory
+ * @param simulate_full_backend 1 to simulate a full back-end sometimes
+ * @param full_backend_rand_lt Back-end will be "full" when a random
+ * value is lower than this parameter
+ * if \p simulate_full_backend is 1
+ * @param full_backend_rand_max Maximum random value for full back-end
+ * simulation when \p simulate_full_backend
+ * is 1
+ * @returns Platform context
+ */
+struct barectf_platform_linux_fs_ctx *barectf_platform_linux_fs_init(
+ unsigned int buf_size, const char *trace_dir, int simulate_full_backend,
+	unsigned int full_backend_rand_lt, unsigned int full_backend_rand_max);
+
+/**
+ * Finalizes the platform.
+ *
+ * @param ctx Platform context
+ */
+void barectf_platform_linux_fs_fini(struct barectf_platform_linux_fs_ctx *ctx);
+
+/**
+ * Returns the barectf stream-specific context of a given platform context.
+ *
+ * This context is what barectf tracing functions need.
+ *
+ * @param ctx Platform context
+ * @returns barectf stream-specific context
+ */
+struct barectf_default_ctx *barectf_platform_linux_fs_get_barectf_ctx(
+ struct barectf_platform_linux_fs_ctx *ctx);
+
+#endif /* _BARECTF_PLATFORM_LINUX_FS_H */
--- /dev/null
+# barectf Parallella platform
+
+This platform targets the [Parallella](http://parallella.org/) system.
+
+This platform implements a ring buffer of packets in shared memory
+between the Epiphany cores and the ARM host. A consumer application
+on the host side is responsible for consuming the packets produced by
+the Epiphany cores and for writing them to the file system.
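+
+The ring buffer indices are free-running 32-bit counters: because
+`RINGBUF_SZ` is a power of two, a packet slot is selected by masking an
+index with `RINGBUF_SZ - 1`, and the ring buffer is full exactly when
+the producer index is `RINGBUF_SZ` ahead of the consumer index. A
+minimal sketch of this scheme (illustrative only; the real code lives
+in `barectf-platform-parallella.c` and `consumer/consumer.c`):
+
+```c
+#include <stdint.h>
+
+#define RINGBUF_SZ 4	/* must be a power of two */
+
+/* free-running indices: they only ever increase */
+static uint32_t producer_index;
+static uint32_t consumer_index;
+
+/* unsigned wrap-around keeps the difference valid past 2^32 */
+static int ringbuf_is_full(void)
+{
+	return producer_index - consumer_index == RINGBUF_SZ;
+}
+
+/* power-of-two size: masking replaces a modulo */
+static unsigned int ringbuf_slot(uint32_t index)
+{
+	return index & (RINGBUF_SZ - 1);
+}
+```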
+
+
+## Requirements
+
+ * A Parallella board
+ * ESDK 2015.1
+ * barectf prefix: `barectf_`
+ * A single stream named `default`
+ * One clock named `default`, returning `uint64_t`, and having a
+ frequency of 1000000000 Hz
+
+The packet context of the `default` stream must contain two unsigned
+integers named `row` and `col`, each at least 6 bits in size, which
+hold the row and column numbers of the Epiphany core producing the
+packet.
+
+Example of packet context:
+
+```yaml
+class: struct
+fields:
+ timestamp_begin: clock_int
+ timestamp_end: clock_int
+ packet_size: uint32
+ content_size: uint32
+ events_discarded: uint32
+ row: uint6
+ col: uint6
+```
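+
+The `clock_int` and `uint6` aliases above are not built in; they must
+be defined in your barectf YAML configuration. The following sketch
+shows plausible definitions (the alias names are only a convention of
+this example; refer to the barectf configuration documentation for the
+exact schema):
+
+```yaml
+type-aliases:
+  uint6:
+    class: int
+    size: 6
+  clock_int:
+    class: int
+    size: 64
+    property-mappings:
+      - type: clock
+        name: default
+        property: value
+```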
+
+
+## Files
+
+ * `barectf-platform-parallella.h`: include this in your application
+ running on Epiphany cores
+ * `barectf-platform-parallella-config.h`: platform parameters
+ * `barectf-platform-parallella-common.h`: definitions, data
+ structures, and functions shared by the platform and the consumer
+ application
+ * `barectf-platform-parallella.c`: link your application with this
+ * `consumer/consumer.c`: consumer application
+ * `consumer/Makefile`: consumer application Makefile
+
+## Using
+
+### Platform API
+
+See [`barectf-platform-parallella.h`](barectf-platform-parallella.h).
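+
+For reference, a minimal Epiphany-side usage sketch. The platform calls
+are those declared in the header above; the generated tracing function
+name depends on your configured events, so `barectf_default_trace_my_event`
+and its payload below are hypothetical:
+
+```c
+#include "barectf-platform-parallella.h"
+#include "barectf.h"
+
+int main(void)
+{
+	/* attach to shared memory, start the clock, open first packet */
+	if (tracing_init()) {
+		return 1;
+	}
+
+	/* hypothetical generated tracing function for event `my_event` */
+	barectf_default_trace_my_event(tracing_get_barectf_ctx(), 23);
+
+	/* close last packet and stop the clock */
+	tracing_fini();
+
+	return 0;
+}
+```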
+
+
+### Consumer application
+
+#### Building
+
+Do:
+
+ make
+
+in the [`consumer`](consumer) directory to build the consumer
+application.
+
+The optional `CROSS_COMPILE` environment variable specifies a
+cross-compiling toolchain prefix.
+
+
+#### Running
+
+Accepted arguments are:
+
+ * `-v`: enable verbose mode
+ * Unnamed argument: output directory of stream files (default: `ctf`)
+
+Example:
+
+ ./consumer -v /path/to/my-trace
+
+The output directory should also contain the `metadata` file produced
+by the `barectf` command-line tool to form a complete CTF trace.
+
+Start the consumer application _before_ starting the Epiphany cores
+running the platform and your application. To make sure your Epiphany
+application is not running, use the `e-reset` command.
+
+Stop the consumer application by sending it the `SIGINT` signal
+(Ctrl+C), and do so _before_ resetting the platform with `e-reset`
+(once the Epiphany application has started). When killed with `SIGINT`,
+the consumer application finishes writing any incomplete packet, then
+quits.
--- /dev/null
+#ifndef _BARECTF_PLATFORM_PARALLELLA_COMMON_H
+#define _BARECTF_PLATFORM_PARALLELLA_COMMON_H
+
+/*
+ * barectf Parallella platform
+ *
+ * Copyright (c) 2015 EfficiOS Inc. and Linux Foundation
+ * Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "barectf-platform-parallella-config.h"
+
+struct ringbuf {
+ uint32_t consumer_index;
+ uint32_t producer_index;
+ uint8_t packets[RINGBUF_SZ][PACKET_SZ];
+
+#ifdef DEBUG
+ char error_buf[256];
+#endif
+};
+
+#define CORES_COUNT (CORES_ROWS * CORES_COLS)
+#define SMEM_SZ (sizeof(struct ringbuf) * CORES_COUNT)
+
+static inline unsigned int rowcol2index(unsigned int row, unsigned int col)
+{
+ return row * CORES_COLS + col;
+}
+
+static inline volatile struct ringbuf *get_ringbuf(void *base,
+ unsigned int row, unsigned int col)
+{
+ unsigned int index = rowcol2index(row, col);
+	volatile struct ringbuf *ringbufs = (volatile struct ringbuf *) base;
+
+ return &ringbufs[index];
+}
+
+#endif /* _BARECTF_PLATFORM_PARALLELLA_COMMON_H */
--- /dev/null
+#ifndef _BARECTF_PLATFORM_PARALLELLA_CONFIG_H
+#define _BARECTF_PLATFORM_PARALLELLA_CONFIG_H
+
+/* barectf Parallella platform parameters */
+
+/* rows of cores (4 for the Parallella) */
+#define CORES_ROWS 4
+
+/* columns of cores (4 for the Parallella) */
+#define CORES_COLS 4
+
+/* packet size (must be a power of two) */
+#ifndef PACKET_SZ
+#define PACKET_SZ 256
+#endif
+
+/* ring buffer size in packets (must be a power of two, at least 2) */
+#ifndef RINGBUF_SZ
+#define RINGBUF_SZ 4
+#endif
+
+/* shared memory region name */
+#ifndef SMEM_NAME
+#define SMEM_NAME "barectf-tracing"
+#endif
+
+/* backend check timeout (cycles) */
+#ifndef BACKEND_CHECK_TIMEOUT
+#define BACKEND_CHECK_TIMEOUT (10000000ULL)
+#endif
+
+/* consumer poll delay (µs) */
+#ifndef CONSUMER_POLL_DELAY
+#define CONSUMER_POLL_DELAY (5000)
+#endif
+
+#endif /* _BARECTF_PLATFORM_PARALLELLA_CONFIG_H */
--- /dev/null
+/*
+ * barectf Parallella platform
+ *
+ * Copyright (c) 2015 EfficiOS Inc. and Linux Foundation
+ * Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+#include <e_lib.h>
+
+#include "barectf-platform-parallella-common.h"
+#include "barectf-platform-parallella.h"
+#include "barectf.h"
+
+static struct tracing_ctx {
+ struct barectf_default_ctx barectf_ctx;
+ volatile struct ringbuf *ringbuf;
+ uint64_t last_backend_check;
+ uint64_t clock_high;
+ e_memseg_t smem;
+
+ /*
+ * Use a producer index's shadow in local memory to avoid
+ * reading the old value of the producer index in shared memory
+ * after having incremented it.
+ *
+ * NEVER read or write the producer index or its shadow
+ * directly: always use get_prod_index() and incr_prod_index().
+ */
+ uint32_t producer_index_shadow;
+
+ unsigned int row, col;
+ uint8_t local_packet[PACKET_SZ];
+ uint8_t initialized;
+ uint8_t backend_wait_period;
+} g_tracing_ctx;
+
+struct barectf_default_ctx *tracing_get_barectf_ctx(void)
+{
+ return &g_tracing_ctx.barectf_ctx;
+}
+
+static inline void incr_prod_index(struct tracing_ctx *tracing_ctx)
+{
+ tracing_ctx->producer_index_shadow++;
+ tracing_ctx->ringbuf->producer_index =
+ tracing_ctx->producer_index_shadow;
+}
+
+static inline uint32_t get_prod_index(struct tracing_ctx *tracing_ctx)
+{
+ return tracing_ctx->producer_index_shadow;
+}
+
+static uint64_t get_clock(void *data)
+{
+ struct tracing_ctx *tracing_ctx = data;
+
+ uint64_t low = (uint64_t) ((uint32_t) -e_ctimer_get(E_CTIMER_1));
+
+ return tracing_ctx->clock_high | low;
+}
+
+static int is_backend_full(void *data)
+{
+ struct tracing_ctx *tracing_ctx = data;
+ int check_shared = 0;
+ int full;
+
+	/* are we in a back-end check wait period? */
+ if (tracing_ctx->backend_wait_period) {
+ /* yes: check if we may check in shared memory now */
+ uint64_t cur_clock = get_clock(data);
+
+ if (cur_clock - tracing_ctx->last_backend_check >=
+ BACKEND_CHECK_TIMEOUT) {
+ /* check in shared memory */
+ check_shared = 1;
+ tracing_ctx->last_backend_check = cur_clock;
+ }
+ } else {
+ /* no: check in shared memory */
+ check_shared = 1;
+ }
+
+ if (check_shared) {
+ full = (get_prod_index(tracing_ctx) -
+ tracing_ctx->ringbuf->consumer_index) == RINGBUF_SZ;
+ tracing_ctx->backend_wait_period = full;
+
+ if (full) {
+ tracing_ctx->last_backend_check = get_clock(data);
+ }
+ } else {
+ /* no shared memory checking: always considered full */
+ full = 1;
+ }
+
+ return full;
+}
+
+static void open_packet(void *data)
+{
+ struct tracing_ctx *tracing_ctx = data;
+
+ barectf_default_open_packet(&tracing_ctx->barectf_ctx,
+ tracing_ctx->row, tracing_ctx->col);
+}
+
+static void close_packet(void *data)
+{
+ struct tracing_ctx *tracing_ctx = data;
+ void *dst;
+ unsigned int index;
+
+ /* close packet now */
+ barectf_default_close_packet(&tracing_ctx->barectf_ctx);
+
+ /*
+ * We know for sure that there is space in the back-end (ring
+ * buffer) for this packet, so "upload" it to shared memory now.
+ */
+ index = get_prod_index(tracing_ctx) & (RINGBUF_SZ - 1);
+ dst = (void *) tracing_ctx->ringbuf->packets[index];
+ memcpy(dst, tracing_ctx->local_packet, PACKET_SZ);
+
+ /* update producer index after copy */
+ incr_prod_index(tracing_ctx);
+}
+
+static struct barectf_platform_callbacks cbs = {
+ .default_clock_get_value = get_clock,
+ .is_backend_full = is_backend_full,
+ .open_packet = open_packet,
+ .close_packet = close_packet,
+};
+
+static void __attribute__((interrupt)) timer1_trace_isr(void)
+{
+	/* CTIMER1 reached 0: account for the wrap, then reset and restart */
+	g_tracing_ctx.clock_high += (1ULL << 32);
+	e_ctimer_set(E_CTIMER_1, E_CTIMER_MAX);
+	e_ctimer_start(E_CTIMER_1, E_CTIMER_CLK);
+}
+
+static void init_clock(void)
+{
+ /* stop and reset CTIMER1 */
+ e_ctimer_stop(E_CTIMER_1);
+ e_ctimer_set(E_CTIMER_1, E_CTIMER_MAX);
+ g_tracing_ctx.clock_high = 0;
+
+ /* enable CTIMER1 interrupt */
+ e_irq_global_mask(E_FALSE);
+ e_irq_attach(E_TIMER1_INT, timer1_trace_isr);
+ e_irq_mask(E_TIMER1_INT, E_FALSE);
+}
+
+static void stop_clock(void)
+{
+ e_ctimer_stop(E_CTIMER_1);
+ e_irq_mask(E_TIMER1_INT, E_TRUE);
+}
+
+void tracing_reset_clock(void)
+{
+ e_ctimer_set(E_CTIMER_1, E_CTIMER_MAX);
+ g_tracing_ctx.clock_high = 0;
+ g_tracing_ctx.backend_wait_period = 0;
+ g_tracing_ctx.last_backend_check = 0;
+ e_ctimer_start(E_CTIMER_1, E_CTIMER_CLK);
+}
+
+int tracing_init(void)
+{
+ e_coreid_t coreid;
+
+ if (g_tracing_ctx.initialized) {
+ /* already initialized */
+ return 0;
+ }
+
+ barectf_init(&g_tracing_ctx.barectf_ctx,
+ g_tracing_ctx.local_packet, PACKET_SZ, cbs, &g_tracing_ctx);
+
+ /* zero local packet */
+ memset(g_tracing_ctx.local_packet, 0, PACKET_SZ);
+
+ /* attach to shared memory */
+ if (e_shm_attach(&g_tracing_ctx.smem, SMEM_NAME) != E_OK) {
+ return -1;
+ }
+
+ /* get core's row and column */
+ coreid = e_get_coreid();
+ e_coords_from_coreid(coreid, &g_tracing_ctx.row, &g_tracing_ctx.col);
+
+ /* get core's ring buffer */
+ g_tracing_ctx.ringbuf =
+ get_ringbuf((void *) g_tracing_ctx.smem.ephy_base,
+ g_tracing_ctx.row, g_tracing_ctx.col);
+
+ /* initialize tracing clock */
+ init_clock();
+
+ /* start tracing clock */
+ tracing_reset_clock();
+
+ /* open first packet */
+ open_packet(&g_tracing_ctx);
+
+	g_tracing_ctx.initialized = 1;
+
+	return 0;
+}
+
+void tracing_fini(void)
+{
+ if (!g_tracing_ctx.initialized) {
+ /* not initialized yet */
+ return;
+ }
+
+ /* close last packet if open and not empty */
+ if (barectf_packet_is_open(&g_tracing_ctx.barectf_ctx) &&
+ !barectf_packet_is_empty(&g_tracing_ctx.barectf_ctx)) {
+ close_packet(&g_tracing_ctx);
+ }
+
+ /* stop CTIMER1 */
+ stop_clock();
+}
--- /dev/null
+#ifndef _BARECTF_PLATFORM_PARALLELLA_H
+#define _BARECTF_PLATFORM_PARALLELLA_H
+
+/*
+ * barectf Parallella platform
+ *
+ * Copyright (c) 2015 EfficiOS Inc. and Linux Foundation
+ * Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "barectf.h"
+
+/**
+ * Initializes the platform.
+ */
+int tracing_init(void);
+
+/**
+ * Returns the barectf context to be used with tracing functions.
+ */
+struct barectf_default_ctx *tracing_get_barectf_ctx(void);
+
+/**
+ * Resets the tracing clock to an absolute 0.
+ *
+ * Call this immediately after a synchronization point across all
+ * participating Epiphany cores.
+ */
+void tracing_reset_clock(void);
+
+/**
+ * Finalizes the platform.
+ */
+void tracing_fini(void);
+
+#endif /* _BARECTF_PLATFORM_PARALLELLA_H */
--- /dev/null
+RM = rm -f
+CC = $(CROSS_COMPILE)gcc
+LD = $(CC)
+
+ESDK=$(EPIPHANY_HOME)
+
+CFLAGS = -O2 -std=c99 -I.. -I"$(ESDK)/tools/host/include"
+LDFLAGS = -L"$(ESDK)/tools/host/lib" -le-hal
+
+TARGET = consumer
+OBJS = $(TARGET).o
+
+.PHONY: all clean
+
+all: $(TARGET)
+
+$(TARGET): $(OBJS)
+ $(LD) -o $@ $^ $(LDFLAGS)
+
+$(TARGET).o: $(TARGET).c
+ $(CC) $(CFLAGS) -c $<
+
+clean:
+ $(RM) $(TARGET) $(OBJS)
--- /dev/null
+/*
+ * barectf Parallella platform: consumer application
+ *
+ * Copyright (c) 2015 EfficiOS Inc. and Linux Foundation
+ * Copyright (c) 2015 Philippe Proulx <pproulx@efficios.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#define _BSD_SOURCE
+#include <unistd.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <assert.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <time.h>
+#include <errno.h>
+#include <e-hal.h>
+
+#include "barectf-platform-parallella-common.h"
+
+#define TARGET_EXECUTABLE_FILENAME "./e_barectf_tracing_2.srec"
+#define mb() __asm__ __volatile__("dmb" : : : "memory")
+
+struct ctx {
+ e_mem_t ringbufs_smem;
+ int stream_fds[CORES_COUNT];
+ const char *trace_dir;
+ int verbose;
+};
+
+static volatile int quit = 0;
+
+static void sig_handler(int signo)
+{
+ if (signo == SIGINT) {
+ quit = 1;
+ fprintf(stderr, "\nGot SIGINT: quitting\n");
+ }
+}
+
+static int try_consume_core_packet(struct ctx *ctx, unsigned int row,
+ unsigned int col)
+{
+ int stream_fd;
+ size_t remaining;
+ uint32_t producer_index;
+ uint32_t consumer_index;
+ uint32_t cons_packet_index;
+ volatile uint8_t *packet_src;
+ unsigned int index = rowcol2index(row, col);
+ volatile struct ringbuf *ringbuf =
+ get_ringbuf(ctx->ringbufs_smem.base, row, col);
+
+#ifdef DEBUG
+ if (ringbuf->error_buf[0]) {
+ printf("[%u, %u] %s\n", row, col, ringbuf->error_buf);
+ }
+#endif /* DEBUG */
+
+ consumer_index = ringbuf->consumer_index;
+ producer_index = ringbuf->producer_index;
+
+ if (producer_index <= consumer_index) {
+ return 0;
+ }
+
+ /* order producer index reading before packet reading */
+ mb();
+
+ /* index of first full packet within ring buffer */
+ cons_packet_index = consumer_index & (RINGBUF_SZ - 1);
+
+ /* full packet data */
+ packet_src = ringbuf->packets[cons_packet_index];
+
+ /* append packet to stream file */
+ remaining = PACKET_SZ;
+
+ if (ctx->verbose) {
+ printf("Consuming one packet from ring buffer of core (%u, %u):\n",
+ row, col);
+ printf(" Producer index: %u\n", producer_index);
+ printf(" Consumer index: %u\n", consumer_index);
+ printf(" Consumer packet index: %u\n", cons_packet_index);
+ }
+
+ stream_fd = ctx->stream_fds[index];
+
+ for (;;) {
+ ssize_t write_ret;
+
+ write_ret = write(stream_fd,
+ (uint8_t *) packet_src + (PACKET_SZ - remaining),
+ remaining);
+ assert(write_ret != 0);
+
+ if (write_ret > 0) {
+ remaining -= write_ret;
+ } else if (write_ret == -1) {
+ if (errno != EINTR) {
+ /* other error */
+ fprintf(stderr, "Error: failed to write packet of core (%u, %u):\n",
+ row, col);
+ perror("write");
+ return -1;
+ }
+ }
+
+ if (remaining == 0) {
+ break;
+ }
+ }
+
+ /* order packet reading before consumer index increment */
+ mb();
+
+ /* packet is consumed: update consumer index now */
+ ringbuf->consumer_index = consumer_index + 1;
+
+ return 0;
+}
+
+static int consume(struct ctx *ctx)
+{
+ int row, col, ret;
+
+ if (ctx->verbose) {
+ printf("Starting consumer\n");
+ }
+
+ for (;;) {
+ if (quit) {
+ return 0;
+ }
+
+ for (row = 0; row < CORES_ROWS; ++row) {
+ for (col = 0; col < CORES_COLS; ++col) {
+ if (quit) {
+ return 0;
+ }
+
+ ret = try_consume_core_packet(ctx, row, col);
+
+ if (ret) {
+ return ret;
+ }
+ }
+ }
+
+		/* sleep before the next check */
+ if (usleep(CONSUMER_POLL_DELAY) == -1) {
+ if (errno != EINTR) {
+ return -1;
+ }
+ }
+ }
+}
+
+static void zero_ringbufs(struct ctx *ctx)
+{
+ memset(ctx->ringbufs_smem.base, 0, SMEM_SZ);
+}
+
+static void close_stream_fds(struct ctx *ctx)
+{
+ int i;
+
+ for (i = 0; i < CORES_COUNT; ++i) {
+ if (ctx->stream_fds[i] >= 0) {
+ int fd = ctx->stream_fds[i];
+
+ if (close(fd) == -1) {
+ fprintf(stderr,
+ "Error: could not close FD %d:\n", fd);
+ perror("close");
+ }
+
+ ctx->stream_fds[i] = -1;
+ }
+ }
+}
+
+static int open_stream_fd(struct ctx *ctx, unsigned int row, unsigned int col)
+{
+ char filename[128];
+ unsigned int index = rowcol2index(row, col);
+
+	snprintf(filename, sizeof(filename), "%s/stream-%u-%u",
+		ctx->trace_dir, row, col);
+	ctx->stream_fds[index] =
+		open(filename, O_CREAT | O_WRONLY | O_TRUNC, 0644);
+
+ if (ctx->stream_fds[index] == -1) {
+ fprintf(stderr, "Error: could not open \"%s\" for writing\n",
+ filename);
+ close_stream_fds(ctx);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int open_stream_fds(struct ctx *ctx)
+{
+ unsigned int row, col;
+
+ for (row = 0; row < CORES_ROWS; ++row) {
+ for (col = 0; col < CORES_COLS; ++col) {
+ int ret = open_stream_fd(ctx, row, col);
+
+ if (ret) {
+ return ret;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static void init_stream_fds(struct ctx *ctx)
+{
+ unsigned int row, col;
+
+ for (row = 0; row < CORES_ROWS; ++row) {
+ for (col = 0; col < CORES_COLS; ++col) {
+ ctx->stream_fds[rowcol2index(row, col)] = -1;
+ }
+ }
+}
+
+static int init(struct ctx *ctx)
+{
+ int ret = 0;
+
+ e_set_host_verbosity(H_D0);
+
+ if (ctx->verbose) {
+ printf("Initializing HAL\n");
+ }
+
+ if (e_init(NULL) != E_OK) {
+ fprintf(stderr, "Error: Epiphany HAL initialization failed\n");
+ ret = -1;
+ goto error;
+ }
+
+ if (ctx->verbose) {
+ printf("HAL initialized\n");
+ printf("Allocating %u bytes of shared memory in region \"%s\"\n",
+ SMEM_SZ, SMEM_NAME);
+ }
+
+	/* try to allocate the region; if it already exists, attach to it */
+	if (e_shm_alloc(&ctx->ringbufs_smem, SMEM_NAME, SMEM_SZ) != E_OK) {
+		if (ctx->verbose) {
+			printf("Attaching to shared memory region \"%s\"\n",
+				SMEM_NAME);
+		}
+
+		ret = e_shm_attach(&ctx->ringbufs_smem, SMEM_NAME);
+	}
+
+ if (ret != E_OK) {
+ fprintf(stderr, "Error: failed to attach to shared memory: %s\n",
+ strerror(errno));
+ ret = -1;
+ goto error_finalize;
+ }
+
+ zero_ringbufs(ctx);
+
+ if (ctx->verbose) {
+ printf("Creating CTF stream files in \"%s\"\n", ctx->trace_dir);
+ }
+
+ init_stream_fds(ctx);
+
+ if (open_stream_fds(ctx)) {
+ fprintf(stderr, "Error: failed to create CTF streams\n");
+ ret = -1;
+ goto error_finalize;
+ }
+
+ return 0;
+
+error_finalize:
+ if (ctx->ringbufs_smem.base) {
+ e_shm_release(SMEM_NAME);
+ }
+
+ e_finalize();
+
+error:
+ return ret;
+}
+
+static void fini(struct ctx *ctx)
+{
+ if (ctx->verbose) {
+ printf("Closing CTF stream files\n");
+ }
+
+ close_stream_fds(ctx);
+
+ if (ctx->verbose) {
+ printf("Releasing shared memory region \"%s\"\n", SMEM_NAME);
+ }
+
+ e_shm_release(SMEM_NAME);
+
+ if (ctx->verbose) {
+ printf("Finalizing HAL\n");
+ }
+
+ e_finalize();
+}
+
+static int parse_arguments(int argc, char *argv[], struct ctx *ctx)
+{
+ int i;
+
+ if (argc > 3) {
+ fprintf(stderr,
+ "Error: the only accepted arguments are -v and a trace directory path\n");
+ return -1;
+ }
+
+ for (i = 1; i < argc; ++i) {
+ const char *arg = argv[i];
+
+ if (strcmp(arg, "-v") == 0) {
+ ctx->verbose = 1;
+ } else {
+ ctx->trace_dir = arg;
+ }
+ }
+
+ if (!ctx->trace_dir) {
+ ctx->trace_dir = "ctf";
+ }
+
+ return 0;
+}
+
+int main(int argc, char *argv[])
+{
+ int ret = 0;
+ struct ctx ctx;
+
+ if (signal(SIGINT, sig_handler) == SIG_ERR) {
+ fprintf(stderr, "Error: failed to register SIGINT handler\n");
+ ret = 1;
+ goto end;
+ }
+
+ memset(&ctx, 0, sizeof(ctx));
+
+ if (parse_arguments(argc, argv, &ctx)) {
+ ret = 1;
+ goto end;
+ }
+
+ if (init(&ctx)) {
+ ret = 1;
+ goto end;
+ }
+
+ if (consume(&ctx)) {
+ ret = 1;
+ goto end_fini;
+ }
+
+end_fini:
+ fini(&ctx);
+
+end:
+ return ret;
+}
#
# The MIT License (MIT)
#
-# Copyright (c) 2014 Philippe Proulx <philippe.proulx@efficios.com>
+# Copyright (c) 2014-2015 Philippe Proulx <pproulx@efficios.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
-import os
-import sys
-import subprocess
-from setuptools import setup
-
-
-# make sure we run Python 3+ here
-v = sys.version_info
-
-if v.major < 3:
- sys.stderr.write('Sorry, barectf needs Python 3\n')
- sys.exit(1)
+from setuptools import setup
+import sys
-install_requires = [
- 'termcolor',
- 'pytsdl',
-]
+def _check_python3():
+ # make sure we run Python 3+ here
+ v = sys.version_info
-packages = [
- 'barectf',
-]
+ if v.major < 3:
+ sys.stderr.write('Sorry, barectf needs Python 3\n')
+ sys.exit(1)
-entry_points = {
- 'console_scripts': [
- 'barectf = barectf.cli:run'
- ],
-}
+_check_python3()
setup(name='barectf',
- version='0.3.1',
+ version='2.0.0',
description='Generator of C99 code that can write native CTF',
author='Philippe Proulx',
author_email='eeppeliteloop@gmail.com',
license='MIT',
keywords='ctf generator tracing bare-metal bare-machine',
url='https://github.com/efficios/barectf',
- packages=packages,
- install_requires=install_requires,
- entry_points=entry_points)
+ packages=[
+ 'barectf',
+ ],
+ install_requires=[
+ 'termcolor',
+ 'pyyaml',
+ ],
+ entry_points={
+ 'console_scripts': [
+ 'barectf = barectf.cli:run'
+ ],
+ })