Change timestamp data type to unixnano in Metrics Protobuf definitions #33

Conversation

tigrannajaryan
Member

This change applies the same refinement already made to the Traces
Protobuf definitions as part of open-telemetry/oteps#59,
which proved to yield significant performance improvements.

I replaced google.protobuf.Timestamp with an int64 holding time in Unix epoch nanoseconds.

A simple benchmark in Go demonstrates the following improvement in encoding and decoding
compared to the current state:

```
===== Encoded sizes
Encoding                       Uncompressed  Improved        Compressed  Improved
Baseline/MetricOne              20000 bytes  [1.000], gziped 1506 bytes  [1.000]
Proposed/MetricOne              18250 bytes  [1.096], gziped 1433 bytes  [1.051]

Encoding                       Uncompressed  Improved        Compressed  Improved
Baseline/MetricSeries           51797 bytes  [1.000], gziped 6455 bytes  [1.000]
Proposed/MetricSeries           43047 bytes  [1.203], gziped 6093 bytes  [1.059]

goos: darwin
goarch: amd64
pkg: github.com/tigrannajaryan/exp-otelproto/encodings
BenchmarkEncode/Baseline/MetricOne-8         	      30	 186998840 ns/op
BenchmarkEncode/Proposed/MetricOne-8         	      36	 166668705 ns/op

BenchmarkEncode/Baseline/MetricSeries-8      	       8	 632391842 ns/op
BenchmarkEncode/Proposed/MetricSeries-8      	      10	 537384515 ns/op

BenchmarkDecode/Baseline/MetricOne-8         	      16	 348156010 ns/op	171896049 B/op	 4974000 allocs/op
BenchmarkDecode/Proposed/MetricOne-8         	      19	 314727259 ns/op	155096036 B/op	 4624000 allocs/op

BenchmarkDecode/Baseline/MetricSeries-8      	       5	1013035422 ns/op	440696048 B/op	11874000 allocs/op
BenchmarkDecode/Proposed/MetricSeries-8      	       6	 846887981 ns/op	356696040 B/op	10124000 allocs/op
```

It is 10-15% faster and 10-20% smaller on the wire and in memory.

Benchmarks encode and decode 500 batches of 2 metrics: one int64 Gauge with 5 time series
and one Histogram of doubles with 1 time series and a single bucket. Each time series of
both metrics contains either 1 data point (MetricOne) or 5 data points (MetricSeries).
Both metrics have 2 labels.

Benchmark source code is available at:
https://github.com/tigrannajaryan/exp-otelproto/blob/master/encodings/encoding_test.go
@tigrannajaryan
Member Author

Please merge.
As a Spec SIG approver, should I have merge rights on this repo?

@bogdandrutu
Member

@tigrannajaryan we apply the same rules here as for all the repos: only maintainers of the repo are allowed to merge, and in this case the TC members are the maintainers.

tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-proto that referenced this pull request Oct 30, 2019
Previously we had a Point message whose value was a oneof over the data types. This was
unnecessary flexibility, because points in the same time series cannot be of different
data types, and it also cost performance.

Now we have a separate time series message definition for each data type, and the
time series used is determined by the oneof entry in the Metric message.

This change is stacked on top of open-telemetry#33

A simple benchmark in Go demonstrates the following improvement in encoding and decoding
compared to the baseline state:

```
===== Encoded sizes
Encoding                       Uncompressed  Improved        Compressed  Improved
Baseline/MetricOne              24200 bytes  [1.000], gziped 1804 bytes  [1.000]
Proposed/MetricOne              19400 bytes  [1.247], gziped 1626 bytes  [1.109]

Encoding                       Uncompressed  Improved        Compressed  Improved
Baseline/MetricSeries           56022 bytes  [1.000], gziped 6655 bytes  [1.000]
Proposed/MetricSeries           43415 bytes  [1.290], gziped 6422 bytes  [1.036]

goos: darwin
goarch: amd64
pkg: github.com/tigrannajaryan/exp-otelproto/encodings
BenchmarkEncode/Baseline/MetricOne-8         	      27	 207923054 ns/op
BenchmarkEncode/Proposed/MetricOne-8         	      44	 133984867 ns/op

BenchmarkEncode/Baseline/MetricSeries-8      	       8	 649581262 ns/op
BenchmarkEncode/Proposed/MetricSeries-8      	      18	 324559562 ns/op

BenchmarkDecode/Baseline/MetricOne-8         	      15	 379468217 ns/op	186296043 B/op	 5274000 allocs/op
BenchmarkDecode/Proposed/MetricOne-8         	      21	 278470120 ns/op	155896034 B/op	 4474000 allocs/op

BenchmarkDecode/Baseline/MetricSeries-8      	       5	1041719362 ns/op	455096051 B/op	12174000 allocs/op
BenchmarkDecode/Proposed/MetricSeries-8      	       9	 603392754 ns/op	338296035 B/op	 8574000 allocs/op
```

It is 30-50% faster and 20-25% smaller on the wire and in memory.

Benchmarks encode and decode 500 batches of 2 metrics: one int64 Gauge with 5 time series
and one Histogram of doubles with 1 time series and a single bucket. Each time series of
both metrics contains either 1 data point (MetricOne) or 5 data points (MetricSeries).
Both metrics have 2 labels.

Benchmark source code is available at:
https://github.com/tigrannajaryan/exp-otelproto/blob/master/encodings/encoding_test.go
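
The reshaping described in this commit can be sketched in Protobuf terms. The message and field names below are illustrative only, not the exact definitions from the proposal:

```protobuf
// Before: every point carries a oneof, even though all points of one
// time series share a single data type.
message Point {
  sfixed64 timestamp_unixnano = 1;
  oneof value {
    int64 int64_value = 2;
    double double_value = 3;
  }
}

// After: the type choice moves up to the Metric, and each time series
// message holds points of exactly one type.
message Metric {
  oneof data {
    Int64TimeSeries int64_timeseries = 1;
    DoubleTimeSeries double_timeseries = 2;
  }
}

message Int64TimeSeries {
  repeated Int64Point points = 1;
}

message Int64Point {
  sfixed64 timestamp_unixnano = 1;
  int64 value = 2;
}
```

Moving the oneof from the per-point level to the per-metric level means the type tag is encoded once per series rather than once per point, which is consistent with the size and speed gains shown above.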
@bogdandrutu bogdandrutu merged commit 608c358 into open-telemetry:master Oct 30, 2019
@tigrannajaryan tigrannajaryan deleted the feature/tigran/metrictimestamp branch October 30, 2019 20:29
SergeyKanzhelev pushed a commit that referenced this pull request Oct 31, 2019
* Split timeseries by data types in Metrics Protobuf definitions

* Eliminate *TimeSeriesList messages
tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-proto that referenced this pull request Nov 8, 2019
The change to int64 is proposed by RFC0059 [1]. This commit deviates slightly from
the RFC: we use sfixed64 instead of int64, for consistency with the changes already
made to the Metrics proto [2]. Experiments showed sfixed64 to be more suitable for timestamps.

[1] https://github.com/open-telemetry/oteps/blob/master/text/0059-otlp-trace-data-format.md
[2] open-telemetry#33
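
The sfixed64-versus-int64 trade-off concerns wire size and decode speed: protobuf's int64 uses base-128 varint encoding, and current-epoch nanosecond timestamps are large enough that a varint costs more bytes than a fixed 8-byte field, while also being slower to decode. A small Go sketch using the standard library's varint encoding (the same scheme protobuf uses) illustrates the size side:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// varintLen returns how many bytes a value occupies in base-128
// varint encoding, the scheme protobuf uses for int64 fields.
func varintLen(v uint64) int {
	buf := make([]byte, binary.MaxVarintLen64)
	return binary.PutUvarint(buf, v)
}

func main() {
	// Unix nanoseconds around Nov 2019: roughly 1.57e18, a 61-bit number.
	ns := uint64(1573171200000000000)
	fmt.Println(varintLen(ns)) // 9 bytes as a varint...
	fmt.Println(8)             // ...versus a constant 8 bytes as sfixed64
}
```

In addition to the one-byte saving, a fixed-width field lets the decoder read 8 bytes directly instead of looping over continuation bits, which is why the experiments favored sfixed64 for timestamps.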
SergeyKanzhelev pushed a commit that referenced this pull request Nov 9, 2019