Change timestamp data type to unixnano in Metrics Protobuf definitions #33
Merged
bogdandrutu
merged 3 commits into
open-telemetry:master
from
tigrannajaryan:feature/tigran/metrictimestamp
Oct 30, 2019
Conversation
This change applies the refinement approach already performed on the Traces Protobuf definitions as part of open-telemetry/oteps#59, which proved to yield significant performance improvements. I replaced google.protobuf.Timestamp with int64 time in Unix epoch nanoseconds.

A simple benchmark in Go demonstrates the following improvement in encoding and decoding compared to the current state:

```
===== Encoded sizes
Encoding               Uncompressed  Improved        Compressed  Improved
Baseline/MetricOne     20000 bytes   [1.000], gziped 1506 bytes  [1.000]
Proposed/MetricOne     18250 bytes   [1.096], gziped 1433 bytes  [1.051]

Encoding               Uncompressed  Improved        Compressed  Improved
Baseline/MetricSeries  51797 bytes   [1.000], gziped 6455 bytes  [1.000]
Proposed/MetricSeries  43047 bytes   [1.203], gziped 6093 bytes  [1.059]

goos: darwin
goarch: amd64
pkg: github.com/tigrannajaryan/exp-otelproto/encodings
BenchmarkEncode/Baseline/MetricOne-8      30   186998840 ns/op
BenchmarkEncode/Proposed/MetricOne-8      36   166668705 ns/op
BenchmarkEncode/Baseline/MetricSeries-8    8   632391842 ns/op
BenchmarkEncode/Proposed/MetricSeries-8   10   537384515 ns/op
BenchmarkDecode/Baseline/MetricOne-8      16   348156010 ns/op  171896049 B/op   4974000 allocs/op
BenchmarkDecode/Proposed/MetricOne-8      19   314727259 ns/op  155096036 B/op   4624000 allocs/op
BenchmarkDecode/Baseline/MetricSeries-8    5  1013035422 ns/op  440696048 B/op  11874000 allocs/op
BenchmarkDecode/Proposed/MetricSeries-8    6   846887981 ns/op  356696040 B/op  10124000 allocs/op
```

It is 10-15% faster and 10-20% smaller on the wire and in memory.

Benchmarks encode and decode 500 batches of 2 metrics: one int64 Gauge with 5 time series and one Histogram of doubles with 1 time series and a single bucket. Each time series for both metrics contains either 1 data point (MetricOne) or 5 data points (MetricSeries). Both metrics have 2 labels.

Benchmark source code is available at: https://github.com/tigrannajaryan/exp-otelproto/blob/master/encodings/encoding_test.go
tigrannajaryan
requested review from
AloisReitbauer,
bogdandrutu,
c24t,
carlosalberto,
iredelmeier,
SergeyKanzhelev,
songy23,
tedsuo and
yurishkuro
as code owners
October 30, 2019 15:20
bogdandrutu
reviewed
Oct 30, 2019
jmacd
approved these changes
Oct 30, 2019
SergeyKanzhelev
approved these changes
Oct 30, 2019
bogdandrutu
approved these changes
Oct 30, 2019
Please merge.
bogdandrutu
reviewed
Oct 30, 2019
@tigrannajaryan we do apply the same rules as we do for all the repos, only maintainers of the repo are allowed to merge, and in this case TC members are the maintainers.
tigrannajaryan
pushed a commit
to tigrannajaryan/opentelemetry-proto
that referenced
this pull request
Oct 30, 2019
Previously we had a Point message whose value was a oneof over the data types. This is unnecessary flexibility because points in the same timeseries cannot be of different data types. It also cost performance. Now we have separate timeseries message definitions for each data type, and the timeseries used is defined by the oneof entry in the Metric message.

This change is stacked on top of open-telemetry#33.

A simple benchmark in Go demonstrates the following improvement in encoding and decoding compared to the baseline state:

```
===== Encoded sizes
Encoding               Uncompressed  Improved        Compressed  Improved
Baseline/MetricOne     24200 bytes   [1.000], gziped 1804 bytes  [1.000]
Proposed/MetricOne     19400 bytes   [1.247], gziped 1626 bytes  [1.109]

Encoding               Uncompressed  Improved        Compressed  Improved
Baseline/MetricSeries  56022 bytes   [1.000], gziped 6655 bytes  [1.000]
Proposed/MetricSeries  43415 bytes   [1.290], gziped 6422 bytes  [1.036]

goos: darwin
goarch: amd64
pkg: github.com/tigrannajaryan/exp-otelproto/encodings
BenchmarkEncode/Baseline/MetricOne-8      27   207923054 ns/op
BenchmarkEncode/Proposed/MetricOne-8      44   133984867 ns/op
BenchmarkEncode/Baseline/MetricSeries-8    8   649581262 ns/op
BenchmarkEncode/Proposed/MetricSeries-8   18   324559562 ns/op
BenchmarkDecode/Baseline/MetricOne-8      15   379468217 ns/op  186296043 B/op   5274000 allocs/op
BenchmarkDecode/Proposed/MetricOne-8      21   278470120 ns/op  155896034 B/op   4474000 allocs/op
BenchmarkDecode/Baseline/MetricSeries-8    5  1041719362 ns/op  455096051 B/op  12174000 allocs/op
BenchmarkDecode/Proposed/MetricSeries-8    9   603392754 ns/op  338296035 B/op   8574000 allocs/op
```

It is 30-50% faster and 20-25% smaller on the wire and in memory.

Benchmarks encode and decode 500 batches of 2 metrics: one int64 Gauge with 5 time series and one Histogram of doubles with 1 time series and a single bucket. Each time series for both metrics contains either 1 data point (MetricOne) or 5 data points (MetricSeries). Both metrics have 2 labels.

Benchmark source code is available at: https://github.com/tigrannajaryan/exp-otelproto/blob/master/encodings/encoding_test.go
SergeyKanzhelev
pushed a commit
that referenced
this pull request
Oct 31, 2019
* Split timeseries by data types in Metrics Protobuf definitions

  Previously we had a Point message whose value was a oneof over the data types. This is unnecessary flexibility because points in the same timeseries cannot be of different data types. It also cost performance. Now we have separate timeseries message definitions for each data type, and the timeseries used is defined by the oneof entry in the Metric message.

  This change is stacked on top of #33.

  A simple benchmark in Go demonstrates the following improvement in encoding and decoding compared to the baseline state:

  ```
  ===== Encoded sizes
  Encoding               Uncompressed  Improved        Compressed  Improved
  Baseline/MetricOne     24200 bytes   [1.000], gziped 1804 bytes  [1.000]
  Proposed/MetricOne     19400 bytes   [1.247], gziped 1626 bytes  [1.109]

  Encoding               Uncompressed  Improved        Compressed  Improved
  Baseline/MetricSeries  56022 bytes   [1.000], gziped 6655 bytes  [1.000]
  Proposed/MetricSeries  43415 bytes   [1.290], gziped 6422 bytes  [1.036]

  goos: darwin
  goarch: amd64
  pkg: github.com/tigrannajaryan/exp-otelproto/encodings
  BenchmarkEncode/Baseline/MetricOne-8      27   207923054 ns/op
  BenchmarkEncode/Proposed/MetricOne-8      44   133984867 ns/op
  BenchmarkEncode/Baseline/MetricSeries-8    8   649581262 ns/op
  BenchmarkEncode/Proposed/MetricSeries-8   18   324559562 ns/op
  BenchmarkDecode/Baseline/MetricOne-8      15   379468217 ns/op  186296043 B/op   5274000 allocs/op
  BenchmarkDecode/Proposed/MetricOne-8      21   278470120 ns/op  155896034 B/op   4474000 allocs/op
  BenchmarkDecode/Baseline/MetricSeries-8    5  1041719362 ns/op  455096051 B/op  12174000 allocs/op
  BenchmarkDecode/Proposed/MetricSeries-8    9   603392754 ns/op  338296035 B/op   8574000 allocs/op
  ```

  It is 30-50% faster and 20-25% smaller on the wire and in memory.

  Benchmarks encode and decode 500 batches of 2 metrics: one int64 Gauge with 5 time series and one Histogram of doubles with 1 time series and a single bucket. Each time series for both metrics contains either 1 data point (MetricOne) or 5 data points (MetricSeries). Both metrics have 2 labels.

  Benchmark source code is available at: https://github.com/tigrannajaryan/exp-otelproto/blob/master/encodings/encoding_test.go

* Eliminate *TimeSeriesList messages
tigrannajaryan
pushed a commit
to tigrannajaryan/opentelemetry-proto
that referenced
this pull request
Nov 8, 2019
The change to int64 is proposed by RFC0059 [1]. In this commit there is a small deviation from the RFC: here we use sfixed64 instead of int64 to make it consistent with changes already done for the Metrics proto [2]. Experiments proved sfixed64 to be more suitable for timestamps.

[1] https://github.com/open-telemetry/oteps/blob/master/text/0059-otlp-trace-data-format.md
[2] open-telemetry#33
SergeyKanzhelev
pushed a commit
that referenced
this pull request
Nov 9, 2019
The change to int64 is proposed by RFC0059 [1]. In this commit there is a small deviation from the RFC: here we use sfixed64 instead of int64 to make it consistent with changes already done for the Metrics proto [2]. Experiments proved sfixed64 to be more suitable for timestamps.

[1] https://github.com/open-telemetry/oteps/blob/master/text/0059-otlp-trace-data-format.md
[2] #33