diff --git a/CHANGELOG.md b/CHANGELOG.md index d9cc05be50d..560aff52ed5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -25,13 +25,13 @@ * [FEATURE] Continuous-test: now runable as a module with `mimir -target=continuous-test`. #7747 * [FEATURE] Store-gateway: Allow specific tenants to be enabled or disabled via `-store-gateway.enabled-tenants` or `-store-gateway.disabled-tenants` CLI flags or their corresponding YAML settings. #7653 * [FEATURE] New `-.s3.bucket-lookup-type` flag configures lookup style type, used to access bucket in s3 compatible providers. #7684 -* [FEATURE] Querier: add experimental streaming PromQL engine, enabled with `-querier.promql-engine=mimir`. #7693 #7898 #7899 #8023 #8058 #8096 #8121 #8197 #8230 #8247 #8270 #8276 #8277 #8291 #8303 #8340 #8256 #8348 #8454 +* [FEATURE] Querier: add experimental streaming PromQL engine, enabled with `-querier.promql-engine=mimir`. #7693 #7898 #7899 #8023 #8058 #8096 #8121 #8197 #8230 #8247 #8270 #8276 #8277 #8291 #8303 #8340 #8256 #8348 #8422 #8430 #8454 * [FEATURE] New `/ingester/unregister-on-shutdown` HTTP endpoint allows dynamic access to ingesters' `-ingester.ring.unregister-on-shutdown` configuration. #7739 * [FEATURE] Server: added experimental [PROXY protocol support](https://www.haproxy.org/download/2.3/doc/proxy-protocol.txt). The PROXY protocol support can be enabled via `-server.proxy-protocol-enabled=true`. When enabled, the support is added both to HTTP and gRPC listening ports. #7698 * [FEATURE] mimirtool: Add `runtime-config verify` sub-command, for verifying Mimir runtime config files. #8123 * [FEATURE] Query-frontend, querier: new experimental `/cardinality/active_native_histogram_metrics` API to get active native histogram metric names with statistics about active native histogram buckets. #7982 #7986 #8008 * [FEATURE] Alertmanager: Added `-alertmanager.max-silences-count` and `-alertmanager.max-silence-size-bytes` to set limits on per tenant silences. Disabled by default. #6898 -* [FEATURE] Ingester: add experimental support for the server-side circuit breakers when writing to and reading from ingesters. This can be enabled using `-ingester.push-circuit-breaker.enabled` and `-ingester.read-circuit-breaker.enabled` options. Further `-ingester.push-circuit-breaker.*` and `-ingester.read-circuit-breaker.*` options for configuring circuit-breaker are available. Added metrics `cortex_ingester_circuit_breaker_results_total`, `cortex_ingester_circuit_breaker_transitions_total` and `cortex_ingester_circuit_breaker_current_state`. #8180 #8285 #8315 +* [FEATURE] Ingester: add experimental support for the server-side circuit breakers when writing to and reading from ingesters. This can be enabled using `-ingester.push-circuit-breaker.enabled` and `-ingester.read-circuit-breaker.enabled` options. Further `-ingester.push-circuit-breaker.*` and `-ingester.read-circuit-breaker.*` options for configuring circuit-breaker are available. Added metrics `cortex_ingester_circuit_breaker_results_total`, `cortex_ingester_circuit_breaker_transitions_total`, `cortex_ingester_circuit_breaker_current_state` and `cortex_ingester_circuit_breaker_request_timeouts_total`. #8180 #8285 #8315 #8446 * [FEATURE] Distributor, ingester: add new setting `-validation.past-grace-period` to limit how old (based on the wall clock minus OOO window) the ingested samples can be. The default 0 value disables this limit. 
#8262 * [ENHANCEMENT] Distributor: add metrics `cortex_distributor_samples_per_request` and `cortex_distributor_exemplars_per_request` to track samples/exemplars per request. #8265 * [ENHANCEMENT] Reduced memory allocations in functions used to propagate contextual information between gRPC calls. #7529 @@ -48,6 +48,8 @@ * [ENHANCEMENT] Expose TLS configuration for the S3 backend client. #2652 * [ENHANCEMENT] Rules: Support expansion of native histogram values when using rule templates #7974 * [ENHANCEMENT] Rules: Add metric `cortex_prometheus_rule_group_last_restore_duration_seconds` which measures how long it takes to restore rule groups using the `ALERTS_FOR_STATE` series #7974 +* [ENHANCEMENT] Rules: Added per namespace max rules per rule group limit. The maximum number of rules per rule group for all namespaces continues to be configured by `-ruler.max-rules-per-rule-group`, but now this can be superseded by the new `-ruler.max-rules-per-rule-group-by-namespace` option on a per-namespace basis. This new limit can be overridden using the overrides mechanism to be applied per-tenant. #8378 +* [ENHANCEMENT] Rules: Added per namespace max rule groups per tenant limit. The maximum number of rule groups per tenant for all namespaces continues to be configured by `-ruler.max-rule-groups-per-tenant`, but now this can be superseded by the new `-ruler.max-rule-groups-per-tenant-by-namespace` option on a per-namespace basis. This new limit can be overridden using the overrides mechanism to be applied per-tenant. #8425 * [ENHANCEMENT] OTLP: Improve remote write format translation performance by using label set hashes for metric identifiers instead of string based ones. #8012 * [ENHANCEMENT] Querying: Remove OpEmptyMatch from regex concatenations. #8012 * [ENHANCEMENT] Store-gateway: add `-blocks-storage.bucket-store.max-concurrent-queue-timeout`. When set, queries at the store-gateway's query gate will not wait longer than that to execute. If a query reaches the wait timeout, then the querier will retry the blocks on a different store-gateway. If all store-gateways are unavailable, then the query will fail with `err-mimir-store-consistency-check-failed`. #7777 #8149 @@ -66,6 +68,8 @@ * [ENHANCEMENT] Query-frontend: log the start, end time and matchers for remote read requests to the query stats logs. #8326 #8370 #8373 * [ENHANCEMENT] Query-frontend: be able to block remote read queries via the per tenant runtime override `blocked_queries`. #8372 #8415 * [ENHANCEMENT] Query-frontend: added `remote_read` to `op` supported label values for the `cortex_query_frontend_queries_total` metric. #8412 +* [ENHANCEMENT] Query-frontend: log the overall length and the start and end time offsets from the current time for remote read requests. The start and end times are calculated as the minimum and maximum times of the individual queries in the remote read request. #8404 +* [ENHANCEMENT] Storage Provider: Added option `-.s3.dualstack-enabled` that allows disabling the S3 client's resolution of the AWS S3 endpoint into a dual-stack IPv4/IPv6 endpoint. Defaults to true. #8405 * [BUGFIX] Distributor: prometheus retry on 5xx and 429 errors, while otlp collector only retry on 429, 502, 503 and 504, mapping other 5xx errors to the retryable ones in otlp endpoint. #8324 #8339 * [BUGFIX] Distributor: make OTLP endpoint return marshalled proto bytes as response body for 4xx/5xx errors. #8227 * [BUGFIX] Rules: improve error handling when querier is local to the ruler.
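For illustration only, a per-tenant runtime override using the new per-namespace ruler limits described above might look like the following sketch. The tenant ID `team-a` and the namespace `critical-alerts` are hypothetical; the YAML option names are taken from the configuration reference updated in this change.

```yaml
# Hypothetical runtime overrides: tenant "team-a" gets a larger rule budget only
# in the "critical-alerts" namespace. Other namespaces keep the global
# -ruler.max-rules-per-rule-group and -ruler.max-rule-groups-per-tenant values.
overrides:
  team-a:
    ruler_max_rules_per_rule_group_by_namespace:
      critical-alerts: 30
    ruler_max_rule_groups_per_tenant_by_namespace:
      critical-alerts: 100
```

According to the option descriptions in this change, the same maps are given as JSON on the command line, for example `-ruler.max-rules-per-rule-group-by-namespace='{"critical-alerts": 30}'`.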
#7567 @@ -186,6 +190,7 @@ * [ENHANCEMENT] Clarify Compactor and its storage volume when configured under Kubernetes. #7675 * [ENHANCEMENT] Add OTLP route to _Mimir routes by path_ runbooks section. #8074 * [ENHANCEMENT] Document option server.log-source-ips-full. #8268 +* [ENHANCEMENT] Specify in which component the configuration flags `-compactor.blocks-retention-period`, `-querier.max-query-lookback`, `-query-frontend.max-total-query-length`, `-query-frontend.max-query-expression-size-bytes` are applied and that they are applied to remote read as well. #8433 ### Tools diff --git a/CODEOWNERS b/CODEOWNERS index de0cfdc636b..70e1265d926 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -7,7 +7,7 @@ /.github/workflows/publish-technical-documentation-release-mimir.yml @grafana/docs-tooling @grafana/mimir-maintainers /.github/workflows/update-make-docs.yml @grafana/docs-tooling @grafana/mimir-maintainers -/docs/ @jdbaldry @grafana/mimir-maintainers +/docs/ @tacole02 @grafana/mimir-maintainers /docs/docs.mk @grafana/docs-tooling @grafana/mimir-maintainers /docs/internal/ @grafana/mimir-maintainers /docs/make-docs @grafana/docs-tooling @grafana/mimir-maintainers diff --git a/cmd/mimir/config-descriptor.json b/cmd/mimir/config-descriptor.json index 230e0375adc..666ffc13466 100644 --- a/cmd/mimir/config-descriptor.json +++ b/cmd/mimir/config-descriptor.json @@ -3756,7 +3756,7 @@ "kind": "field", "name": "max_query_lookback", "required": false, - "desc": "Limit how long back data (series and metadata) can be queried, up until \u003clookback\u003e duration ago. This limit is enforced in the query-frontend, querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable.", + "desc": "Limit how long back data (series and metadata) can be queried, up until \u003clookback\u003e duration ago. This limit is enforced in the query-frontend, querier and ruler for instant, range and remote read queries. For metadata queries like series, label names, label values queries the limit is enforced in the querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable.", "fieldValue": null, "fieldDefaultValue": 0, "fieldFlag": "querier.max-query-lookback", @@ -3869,7 +3869,7 @@ "kind": "field", "name": "max_total_query_length", "required": false, - "desc": "Limit the total query time range (end - start time). This limit is enforced in the query-frontend on the received query.", + "desc": "Limit the total query time range (end - start time). This limit is enforced in the query-frontend on the received instant, range or remote read query.", "fieldValue": null, "fieldDefaultValue": 0, "fieldFlag": "query-frontend.max-total-query-length", @@ -3930,7 +3930,7 @@ "kind": "field", "name": "max_query_expression_size_bytes", "required": false, - "desc": "Max size of the raw query, in bytes. 0 to not apply a limit to the size of the query.", + "desc": "Max size of the raw query, in bytes. This limit is enforced by the query-frontend for instant, range and remote read queries. 
0 to not apply a limit to the size of the query.", "fieldValue": null, "fieldDefaultValue": 0, "fieldFlag": "query-frontend.max-query-expression-size-bytes", @@ -4068,6 +4068,28 @@ "fieldType": "boolean", "fieldCategory": "advanced" }, + { + "kind": "field", + "name": "ruler_max_rules_per_rule_group_by_namespace", + "required": false, + "desc": "Maximum number of rules per rule group by namespace. Value is a map, where each key is the namespace and value is the number of rules allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rules specified has the same meaning as -ruler.max-rules-per-rule-group, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rules-per-rule-group.", + "fieldValue": null, + "fieldDefaultValue": {}, + "fieldFlag": "ruler.max-rules-per-rule-group-by-namespace", + "fieldType": "map of string to int", + "fieldCategory": "experimental" + }, + { + "kind": "field", + "name": "ruler_max_rule_groups_per_tenant_by_namespace", + "required": false, + "desc": "Maximum number of rule groups per tenant by namespace. Value is a map, where each key is the namespace and value is the number of rule groups allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rule groups specified has the same meaning as -ruler.max-rule-groups-per-tenant, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rule-groups-per-tenant.", + "fieldValue": null, + "fieldDefaultValue": {}, + "fieldFlag": "ruler.max-rule-groups-per-tenant-by-namespace", + "fieldType": "map of string to int", + "fieldCategory": "experimental" + }, { "kind": "field", "name": "store_gateway_tenant_shard_size", @@ -4082,7 +4104,7 @@ "kind": "field", "name": "compactor_blocks_retention_period", "required": false, - "desc": "Delete blocks containing samples older than the specified retention period. Also used by query-frontend to avoid querying beyond the retention period. 0 to disable.", + "desc": "Delete blocks containing samples older than the specified retention period. Also used by query-frontend to avoid querying beyond the retention period by instant, range or remote read queries. 
0 to disable.", "fieldValue": null, "fieldDefaultValue": 0, "fieldFlag": "compactor.blocks-retention-period", @@ -6136,6 +6158,17 @@ "fieldType": "string", "fieldCategory": "advanced" }, + { + "kind": "field", + "name": "dualstack_enabled", + "required": false, + "desc": "When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region.", + "fieldValue": null, + "fieldDefaultValue": true, + "fieldFlag": "blocks-storage.s3.dualstack-enabled", + "fieldType": "boolean", + "fieldCategory": "experimental" + }, { "kind": "field", "name": "storage_class", @@ -12086,6 +12119,17 @@ "fieldType": "string", "fieldCategory": "advanced" }, + { + "kind": "field", + "name": "dualstack_enabled", + "required": false, + "desc": "When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region.", + "fieldValue": null, + "fieldDefaultValue": true, + "fieldFlag": "ruler-storage.s3.dualstack-enabled", + "fieldType": "boolean", + "fieldCategory": "experimental" + }, { "kind": "field", "name": "storage_class", @@ -14215,6 +14259,17 @@ "fieldType": "string", "fieldCategory": "advanced" }, + { + "kind": "field", + "name": "dualstack_enabled", + "required": false, + "desc": "When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region.", + "fieldValue": null, + "fieldDefaultValue": true, + "fieldFlag": "alertmanager-storage.s3.dualstack-enabled", + "fieldType": "boolean", + "fieldCategory": "experimental" + }, { "kind": "field", "name": "storage_class", @@ -16577,6 +16632,17 @@ "fieldType": "string", "fieldCategory": "advanced" }, + { + "kind": "field", + "name": "dualstack_enabled", + "required": false, + "desc": "When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region.", + "fieldValue": null, + "fieldDefaultValue": true, + "fieldFlag": "common.storage.s3.dualstack-enabled", + "fieldType": "boolean", + "fieldCategory": "experimental" + }, { "kind": "field", "name": "storage_class", diff --git a/cmd/mimir/help-all.txt.tmpl b/cmd/mimir/help-all.txt.tmpl index d4cbc603a3d..3a52513a9f3 100644 --- a/cmd/mimir/help-all.txt.tmpl +++ b/cmd/mimir/help-all.txt.tmpl @@ -33,6 +33,8 @@ Usage of ./cmd/mimir/mimir: Bucket lookup style type, used to access bucket in S3-compatible service. Default is auto. Supported values are: auto, path, virtual-hosted. -alertmanager-storage.s3.bucket-name string S3 bucket name + -alertmanager-storage.s3.dualstack-enabled + [experimental] When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region. (default true) -alertmanager-storage.s3.endpoint string The S3 bucket endpoint. It could be an AWS S3 endpoint listed at https://docs.aws.amazon.com/general/latest/gr/s3.html or the address of an S3-compatible service in hostname:port format. -alertmanager-storage.s3.expect-continue-timeout duration @@ -691,6 +693,8 @@ Usage of ./cmd/mimir/mimir: Bucket lookup style type, used to access bucket in S3-compatible service. Default is auto. Supported values are: auto, path, virtual-hosted. -blocks-storage.s3.bucket-name string S3 bucket name + -blocks-storage.s3.dualstack-enabled + [experimental] When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region. (default true) -blocks-storage.s3.endpoint string The S3 bucket endpoint. 
It could be an AWS S3 endpoint listed at https://docs.aws.amazon.com/general/latest/gr/s3.html or the address of an S3-compatible service in hostname:port format. -blocks-storage.s3.expect-continue-timeout duration @@ -869,6 +873,8 @@ Usage of ./cmd/mimir/mimir: Bucket lookup style type, used to access bucket in S3-compatible service. Default is auto. Supported values are: auto, path, virtual-hosted. -common.storage.s3.bucket-name string S3 bucket name + -common.storage.s3.dualstack-enabled + [experimental] When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region. (default true) -common.storage.s3.endpoint string The S3 bucket endpoint. It could be an AWS S3 endpoint listed at https://docs.aws.amazon.com/general/latest/gr/s3.html or the address of an S3-compatible service in hostname:port format. -common.storage.s3.expect-continue-timeout duration @@ -970,7 +976,7 @@ Usage of ./cmd/mimir/mimir: -compactor.block-upload-verify-chunks Verify chunks when uploading blocks via the upload API for the tenant. (default true) -compactor.blocks-retention-period duration - Delete blocks containing samples older than the specified retention period. Also used by query-frontend to avoid querying beyond the retention period. 0 to disable. + Delete blocks containing samples older than the specified retention period. Also used by query-frontend to avoid querying beyond the retention period by instant, range or remote read queries. 0 to disable. -compactor.cleanup-concurrency int Max number of tenants for which blocks cleanup and maintenance should run concurrently. (default 20) -compactor.cleanup-interval duration @@ -1784,7 +1790,7 @@ Usage of ./cmd/mimir/mimir: -querier.max-query-into-future duration [deprecated] Maximum duration into the future you can query. 0 to disable. (default 10m0s) -querier.max-query-lookback duration - Limit how long back data (series and metadata) can be queried, up until duration ago. This limit is enforced in the query-frontend, querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable. + Limit how long back data (series and metadata) can be queried, up until duration ago. This limit is enforced in the query-frontend, querier and ruler for instant, range and remote read queries. For metadata queries like series, label names, label values queries the limit is enforced in the querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable. -querier.max-query-parallelism int Maximum number of split (by time) or partial (by shard) queries that will be scheduled in parallel by the query-frontend for a single input query. This limit is introduced to have a fairer query scheduling and avoid a single query over a large time range saturating all available queriers. (default 14) -querier.max-samples int @@ -1950,11 +1956,11 @@ Usage of ./cmd/mimir/mimir: -query-frontend.max-queriers-per-tenant int Maximum number of queriers that can handle requests for a single tenant. If set to 0 or value higher than number of available queriers, *all* queriers will handle requests for the tenant. Each frontend (or query-scheduler, if used) will select the same set of queriers for the same tenant (given that all queriers are connected to all frontends / query-schedulers). 
This option only works with queriers connecting to the query-frontend / query-scheduler, not when using downstream URL. -query-frontend.max-query-expression-size-bytes int - Max size of the raw query, in bytes. 0 to not apply a limit to the size of the query. + Max size of the raw query, in bytes. This limit is enforced by the query-frontend for instant, range and remote read queries. 0 to not apply a limit to the size of the query. -query-frontend.max-retries-per-request int Maximum number of retries for a single request; beyond this, the downstream error is returned. (default 5) -query-frontend.max-total-query-length duration - Limit the total query time range (end - start time). This limit is enforced in the query-frontend on the received query. + Limit the total query time range (end - start time). This limit is enforced in the query-frontend on the received instant, range or remote read query. -query-frontend.not-running-timeout duration Maximum time to wait for the query-frontend to become ready before rejecting requests received before the frontend was ready. 0 to disable (i.e. fail immediately if a request is received while the frontend is still starting up) (default 2s) -query-frontend.parallelize-shardable-queries @@ -2337,6 +2343,8 @@ Usage of ./cmd/mimir/mimir: Bucket lookup style type, used to access bucket in S3-compatible service. Default is auto. Supported values are: auto, path, virtual-hosted. -ruler-storage.s3.bucket-name string S3 bucket name + -ruler-storage.s3.dualstack-enabled + [experimental] When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region. (default true) -ruler-storage.s3.endpoint string The S3 bucket endpoint. It could be an AWS S3 endpoint listed at https://docs.aws.amazon.com/general/latest/gr/s3.html or the address of an S3-compatible service in hostname:port format. -ruler-storage.s3.expect-continue-timeout duration @@ -2515,8 +2523,12 @@ Usage of ./cmd/mimir/mimir: Max time to tolerate outage for restoring "for" state of alert. (default 1h0m0s) -ruler.max-rule-groups-per-tenant int Maximum number of rule groups per-tenant. 0 to disable. (default 70) + -ruler.max-rule-groups-per-tenant-by-namespace value + Maximum number of rule groups per tenant by namespace. Value is a map, where each key is the namespace and value is the number of rule groups allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rule groups specified has the same meaning as -ruler.max-rule-groups-per-tenant, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rule-groups-per-tenant. (default {}) -ruler.max-rules-per-rule-group int Maximum number of rules per rule group per-tenant. 0 to disable. (default 20) + -ruler.max-rules-per-rule-group-by-namespace value + Maximum number of rules per rule group by namespace. Value is a map, where each key is the namespace and value is the number of rules allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rules specified has the same meaning as -ruler.max-rules-per-rule-group, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rules-per-rule-group. (default {}) -ruler.notification-queue-capacity int Capacity of the queue for notifications to be sent to the Alertmanager. 
(default 10000) -ruler.notification-timeout duration diff --git a/cmd/mimir/help.txt.tmpl b/cmd/mimir/help.txt.tmpl index 1107b2ce5f3..ddfc6eaadc8 100644 --- a/cmd/mimir/help.txt.tmpl +++ b/cmd/mimir/help.txt.tmpl @@ -312,7 +312,7 @@ Usage of ./cmd/mimir/mimir: -compactor.block-upload-verify-chunks Verify chunks when uploading blocks via the upload API for the tenant. (default true) -compactor.blocks-retention-period duration - Delete blocks containing samples older than the specified retention period. Also used by query-frontend to avoid querying beyond the retention period. 0 to disable. + Delete blocks containing samples older than the specified retention period. Also used by query-frontend to avoid querying beyond the retention period by instant, range or remote read queries. 0 to disable. -compactor.compactor-tenant-shard-size int Max number of compactors that can compact blocks for single tenant. 0 to disable the limit and use all compactors. -compactor.data-dir string @@ -468,7 +468,7 @@ Usage of ./cmd/mimir/mimir: -querier.max-partial-query-length duration Limit the time range for partial queries at the querier level. -querier.max-query-lookback duration - Limit how long back data (series and metadata) can be queried, up until duration ago. This limit is enforced in the query-frontend, querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable. + Limit how long back data (series and metadata) can be queried, up until duration ago. This limit is enforced in the query-frontend, querier and ruler for instant, range and remote read queries. For metadata queries like series, label names, label values queries the limit is enforced in the querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable. -querier.max-query-parallelism int Maximum number of split (by time) or partial (by shard) queries that will be scheduled in parallel by the query-frontend for a single input query. This limit is introduced to have a fairer query scheduling and avoid a single query over a large time range saturating all available queriers. (default 14) -querier.max-samples int @@ -486,9 +486,9 @@ Usage of ./cmd/mimir/mimir: -query-frontend.max-queriers-per-tenant int Maximum number of queriers that can handle requests for a single tenant. If set to 0 or value higher than number of available queriers, *all* queriers will handle requests for the tenant. Each frontend (or query-scheduler, if used) will select the same set of queriers for the same tenant (given that all queriers are connected to all frontends / query-schedulers). This option only works with queriers connecting to the query-frontend / query-scheduler, not when using downstream URL. -query-frontend.max-query-expression-size-bytes int - Max size of the raw query, in bytes. 0 to not apply a limit to the size of the query. + Max size of the raw query, in bytes. This limit is enforced by the query-frontend for instant, range and remote read queries. 0 to not apply a limit to the size of the query. -query-frontend.max-total-query-length duration - Limit the total query time range (end - start time). This limit is enforced in the query-frontend on the received query. + Limit the total query time range (end - start time). 
This limit is enforced in the query-frontend on the received instant, range or remote read query. -query-frontend.parallelize-shardable-queries True to enable query sharding. -query-frontend.query-result-response-format string @@ -645,8 +645,12 @@ Usage of ./cmd/mimir/mimir: URL of alerts return path. -ruler.max-rule-groups-per-tenant int Maximum number of rule groups per-tenant. 0 to disable. (default 70) + -ruler.max-rule-groups-per-tenant-by-namespace value + Maximum number of rule groups per tenant by namespace. Value is a map, where each key is the namespace and value is the number of rule groups allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rule groups specified has the same meaning as -ruler.max-rule-groups-per-tenant, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rule-groups-per-tenant. (default {}) -ruler.max-rules-per-rule-group int Maximum number of rules per rule group per-tenant. 0 to disable. (default 20) + -ruler.max-rules-per-rule-group-by-namespace value + Maximum number of rules per rule group by namespace. Value is a map, where each key is the namespace and value is the number of rules allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rules specified has the same meaning as -ruler.max-rules-per-rule-group, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rules-per-rule-group. (default {}) -ruler.query-frontend.address string GRPC listen address of the query-frontend(s). Must be a DNS address (prefixed with dns:///) to enable client side load balancing. -ruler.query-frontend.query-result-response-format string diff --git a/development/mimir-read-write-mode/config/prometheus.yaml b/development/mimir-read-write-mode/config/prometheus.yaml new file mode 100644 index 00000000000..2f5a5707ff7 --- /dev/null +++ b/development/mimir-read-write-mode/config/prometheus.yaml @@ -0,0 +1,9 @@ +# Configure Prometheus to read from Mimir, so that we can test Mimir remote read endpoint +# sending queries from Prometheus. +remote_read: + - name: Mimir + url: http://mimir-read-1:8080/prometheus/api/v1/read + remote_timeout: 10s + read_recent: true + headers: + X-Scope-OrgID: anonymous diff --git a/development/mimir-read-write-mode/docker-compose.jsonnet b/development/mimir-read-write-mode/docker-compose.jsonnet index e31592fb5cd..110bd501d5b 100644 --- a/development/mimir-read-write-mode/docker-compose.jsonnet +++ b/development/mimir-read-write-mode/docker-compose.jsonnet @@ -9,6 +9,7 @@ std.manifestYamlDoc({ self.grafana + self.grafana_agent + self.memcached + + self.prometheus + {}, write:: { @@ -119,6 +120,21 @@ std.manifestYamlDoc({ }, }, + prometheus:: { + prometheus: { + image: 'prom/prometheus:v2.53.0', + command: [ + '--config.file=/etc/prometheus/prometheus.yaml', + '--enable-feature=exemplar-storage', + '--enable-feature=native-histograms', + ], + volumes: [ + './config:/etc/prometheus', + ], + ports: ['9090:9090'], + }, + }, + // This function builds docker-compose declaration for Mimir service. 
local mimirService(serviceOptions) = { local defaultOptions = { diff --git a/development/mimir-read-write-mode/docker-compose.yml b/development/mimir-read-write-mode/docker-compose.yml index c30f22ca2e7..ceb21a6197a 100644 --- a/development/mimir-read-write-mode/docker-compose.yml +++ b/development/mimir-read-write-mode/docker-compose.yml @@ -185,4 +185,14 @@ - "8080:8080" "volumes": - "../common/config:/etc/nginx/templates" + "prometheus": + "command": + - "--config.file=/etc/prometheus/prometheus.yaml" + - "--enable-feature=exemplar-storage" + - "--enable-feature=native-histograms" + "image": "prom/prometheus:v2.53.0" + "ports": + - "9090:9090" + "volumes": + - "./config:/etc/prometheus" "version": "3.4" diff --git a/docs/sources/mimir/configure/about-versioning.md b/docs/sources/mimir/configure/about-versioning.md index 030ec40d344..17a7708abcb 100644 --- a/docs/sources/mimir/configure/about-versioning.md +++ b/docs/sources/mimir/configure/about-versioning.md @@ -56,6 +56,9 @@ The following features are currently experimental: - `-compactor.no-blocks-file-cleanup-enabled` - Ruler - Aligning of evaluation timestamp on interval (`align_evaluation_time_on_interval`) + - Allow defining limits on the maximum number of rules allowed in a rule group by namespace and the maximum number of rule groups by namespace. If set, this supersedes the `-ruler.max-rules-per-rule-group` and `-ruler.max-rule-groups-per-tenant` limits. + - `-ruler.max-rules-per-rule-group-by-namespace` + - `-ruler.max-rule-groups-per-tenant-by-namespace` - Distributor - Metrics relabeling - `-distributor.metric-relabeling-enabled` diff --git a/docs/sources/mimir/configure/configuration-parameters/index.md b/docs/sources/mimir/configure/configuration-parameters/index.md index 105da2d0207..9c591b7477f 100644 --- a/docs/sources/mimir/configure/configuration-parameters/index.md +++ b/docs/sources/mimir/configure/configuration-parameters/index.md @@ -3262,9 +3262,11 @@ The `limits` block configures default and per-tenant limits imposed by component # Limit how long back data (series and metadata) can be queried, up until # duration ago. This limit is enforced in the query-frontend, querier -# and ruler. If the requested time range is outside the allowed range, the -# request will not fail but will be manipulated to only query data within the -# allowed time range. 0 to disable. +# and ruler for instant, range and remote read queries. For metadata queries +# like series, label names, label values queries the limit is enforced in the +# querier and ruler. If the requested time range is outside the allowed range, +# the request will not fail but will be manipulated to only query data within +# the allowed time range. 0 to disable. # CLI flag: -querier.max-query-lookback [max_query_lookback: | default = 0s] @@ -3330,7 +3332,7 @@ The `limits` block configures default and per-tenant limits imposed by component [query_ingesters_within: | default = 13h] # Limit the total query time range (end - start time). This limit is enforced in -# the query-frontend on the received query. +# the query-frontend on the received instant, range or remote read query. # CLI flag: -query-frontend.max-total-query-length [max_total_query_length: | default = 0s] @@ -3362,8 +3364,9 @@ The `limits` block configures default and per-tenant limits imposed by component # CLI flag: -query-frontend.cache-unaligned-requests [cache_unaligned_requests: | default = false] -# Max size of the raw query, in bytes. 0 to not apply a limit to the size of the -# query. 
+# Max size of the raw query, in bytes. This limit is enforced by the +# query-frontend for instant, range and remote read queries. 0 to not apply a +# limit to the size of the query. # CLI flag: -query-frontend.max-query-expression-size-bytes [max_query_expression_size_bytes: | default = 0] @@ -3434,6 +3437,24 @@ The `limits` block configures default and per-tenant limits imposed by component # CLI flag: -ruler.sync-rules-on-changes-enabled [ruler_sync_rules_on_changes_enabled: | default = true] +# (experimental) Maximum number of rules per rule group by namespace. Value is a +# map, where each key is the namespace and value is the number of rules allowed +# in the namespace (int). On the command line, this map is given in a JSON +# format. The number of rules specified has the same meaning as +# -ruler.max-rules-per-rule-group, but only applies for the specific namespace. +# If specified, it supersedes -ruler.max-rules-per-rule-group. +# CLI flag: -ruler.max-rules-per-rule-group-by-namespace +[ruler_max_rules_per_rule_group_by_namespace: | default = {}] + +# (experimental) Maximum number of rule groups per tenant by namespace. Value is +# a map, where each key is the namespace and value is the number of rule groups +# allowed in the namespace (int). On the command line, this map is given in a +# JSON format. The number of rule groups specified has the same meaning as +# -ruler.max-rule-groups-per-tenant, but only applies for the specific +# namespace. If specified, it supersedes -ruler.max-rule-groups-per-tenant. +# CLI flag: -ruler.max-rule-groups-per-tenant-by-namespace +[ruler_max_rule_groups_per_tenant_by_namespace: | default = {}] + # The tenant's shard size, used when store-gateway sharding is enabled. Value of # 0 disables shuffle sharding for the tenant, that is all tenant blocks are # sharded across all store-gateway replicas. @@ -3441,8 +3462,8 @@ The `limits` block configures default and per-tenant limits imposed by component [store_gateway_tenant_shard_size: | default = 0] # Delete blocks containing samples older than the specified retention period. -# Also used by query-frontend to avoid querying beyond the retention period. 0 -# to disable. +# Also used by query-frontend to avoid querying beyond the retention period by +# instant, range or remote read queries. 0 to disable. # CLI flag: -compactor.blocks-retention-period [compactor_blocks_retention_period: | default = 0s] @@ -4775,6 +4796,11 @@ The s3_backend block configures the connection to Amazon S3 object storage backe # CLI flag: -.s3.bucket-lookup-type [bucket_lookup_type: | default = "auto"] +# (experimental) When enabled, direct all AWS S3 requests to the dual-stack +# IPv4/IPv6 endpoint for the configured region. +# CLI flag: -.s3.dualstack-enabled +[dualstack_enabled: | default = true] + # (experimental) The S3 storage class to use, not set by default. Details can be # found at https://aws.amazon.com/s3/storage-classes/. 
Supported values are: # STANDARD, REDUCED_REDUNDANCY, GLACIER, STANDARD_IA, ONEZONE_IA, diff --git a/docs/sources/mimir/configure/configure-spread-minimizing-tokens/index.md b/docs/sources/mimir/configure/configure-spread-minimizing-tokens/index.md index 31ca35e75f2..bf0d62429ac 100644 --- a/docs/sources/mimir/configure/configure-spread-minimizing-tokens/index.md +++ b/docs/sources/mimir/configure/configure-spread-minimizing-tokens/index.md @@ -15,9 +15,14 @@ Using this guide, you can configure Mimir's ingesters to use the _spread-minimiz The ingester time series replication should be [configured with enabled zone-awareness](https://grafana.com/docs/mimir/latest/configure/configure-zone-aware-replication/#configuring-ingester-time-series-replication). -{{% admonition type="note" %}}Spread-mimizing tokens are recommended if [shuffle sharding](https://grafana.com/docs/mimir/latest/configure/configure-shuffle-sharding/#ingesters-shuffle-sharding) is disabled in your ingesters, or, if shuffle-sharding is enabled, but most of the tenants of your system use all available ingesters. +{{% admonition type="note" %}}Spread-minimizing tokens are recommended if [shuffle-sharding](https://grafana.com/docs/mimir/latest/configure/configure-shuffle-sharding/#ingesters-shuffle-sharding) is disabled on the [write path](https://grafana.com/docs/mimir/latest/configure/configure-shuffle-sharding/#ingesters-write-path) of your ingesters, or if it is enabled but most of the tenants of your system use all available ingesters. {{% /admonition %}} +{{% admonition type="note" %}}To prevent incorrect query results, [shuffle-sharding](https://grafana.com/docs/mimir/latest/configure/configure-shuffle-sharding/#ingesters-shuffle-sharding) on the [read path](https://grafana.com/docs/mimir/latest/configure/configure-shuffle-sharding/#ingesters-read-path) of your ingesters must be disabled before migrating ingesters to spread-minimizing tokens. Shuffle-sharding on the ingesters' read path can be re-enabled at the earliest `-querier.query-store-after` time after the last ingester zone was migrated to spread-minimizing tokens. +{{% /admonition %}} + +If ingesters are configured with a non-empty value of `-ingester.ring.tokens-file-path`, this is the file where ingesters store their tokens at shutdown and restore them at startup. Keep track of this value, because you need it in the last step. + For simplicity, let’s assume that there are three configured availability zones named `zone-a`, `zone-b`, and `zone-c`. Migration to the _spread-minimizing token generation strategy_ is a complex process performed zone by zone to prevent any data loss. ## Step 1: Disable write requests to ingesters from `zone-a` @@ -103,6 +108,16 @@ If everything went smoothly, you should see something like this: Repeat steps 1 to 4, replacing all the occurrences of `zone-a` with `zone-b`. -## Step 6: migrate ingesters from `zone-c` +## Step 6: Migrate ingesters from `zone-c` Repeat steps 1 to 4, replacing all the occurrences of `zone-a` with `zone-c`. + +## Step 7: Delete the old token files + +If, before the migration, you configured ingesters to store their tokens under `-ingester.ring.tokens-file-path`, you must delete these files after migrating all ingester zones to spread-minimizing tokens.
+ +For example, if an ingester pod called `ingester-zone-a` from a namespace called `mimir-prod` used to store its tokens in a file called `/data/tokens`, you can run the following command to delete the `/data/tokens` file: + +``` +kubectl -n mimir-prod exec ingester-zone-a-0 -- rm /data/tokens +``` diff --git a/go.mod b/go.mod index 616d530d15f..826dfba3c49 100644 --- a/go.mod +++ b/go.mod @@ -265,7 +265,7 @@ require ( ) // Using a fork of Prometheus with Mimir-specific changes. -replace github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20240618115521-86ae072cdc80 +replace github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20240620082736-3d8577bc0dfb // Replace memberlist with our fork which includes some fixes that haven't been // merged upstream yet: diff --git a/go.sum b/go.sum index 90dc2ee3800..2377e0cab3b 100644 --- a/go.sum +++ b/go.sum @@ -517,8 +517,8 @@ github.com/grafana/gomemcache v0.0.0-20240229205252-cd6a66d6fb56 h1:X8IKQ0wu40wp github.com/grafana/gomemcache v0.0.0-20240229205252-cd6a66d6fb56/go.mod h1:PGk3RjYHpxMM8HFPhKKo+vve3DdlPUELZLSDEFehPuU= github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe h1:yIXAAbLswn7VNWBIvM71O2QsgfgW9fRXZNR0DXe6pDU= github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE= -github.com/grafana/mimir-prometheus v0.0.0-20240618115521-86ae072cdc80 h1:ZmX7/xId5rYSsQ88mz02/+6dGuxPSWtplNfrCG4ayNE= -github.com/grafana/mimir-prometheus v0.0.0-20240618115521-86ae072cdc80/go.mod h1:ZrBwbXc+KqeAQT4QXHKfi68+7vaOzoSdrkk90RRgdkE= +github.com/grafana/mimir-prometheus v0.0.0-20240620082736-3d8577bc0dfb h1:RRarhtGItcl/8m6u9xn1eggjzG56A/hBujQ2xfR7RSA= +github.com/grafana/mimir-prometheus v0.0.0-20240620082736-3d8577bc0dfb/go.mod h1:ZrBwbXc+KqeAQT4QXHKfi68+7vaOzoSdrkk90RRgdkE= github.com/grafana/opentracing-contrib-go-stdlib v0.0.0-20230509071955-f410e79da956 h1:em1oddjXL8c1tL0iFdtVtPloq2hRPen2MJQKoAWpxu0= github.com/grafana/opentracing-contrib-go-stdlib v0.0.0-20230509071955-f410e79da956/go.mod h1:qtI1ogk+2JhVPIXVc6q+NHziSmy2W5GbdQZFUHADCBU= github.com/grafana/prometheus-alertmanager v0.25.1-0.20240605141526-70d9d63f74fc h1:VoEf4wNiS3hCPxxmFdEvyeZJA3eI4Wb4gAlzYZwh52A= diff --git a/integration/backward_compatibility_test.go b/integration/backward_compatibility_test.go index e6fc1c60960..f6aa5dcb851 100644 --- a/integration/backward_compatibility_test.go +++ b/integration/backward_compatibility_test.go @@ -9,6 +9,7 @@ package integration import ( "encoding/json" "fmt" + "net/http" "os" "strings" "testing" @@ -92,8 +93,8 @@ func runBackwardCompatibilityTest(t *testing.T, previousImage string, oldFlagsMa // Push some series to Mimir. 
series1Timestamp := time.Now() series2Timestamp := series1Timestamp.Add(blockRangePeriod * 2) - series1, expectedVector1, _ := generateFloatSeries("series_1", series1Timestamp, prompb.Label{Name: "series_1", Value: "series_1"}) - series2, expectedVector2, _ := generateFloatSeries("series_2", series2Timestamp, prompb.Label{Name: "series_2", Value: "series_2"}) + series1, expectedVector1, _ := generateFloatSeries("series_1", series1Timestamp, prompb.Label{Name: "label_1", Value: "label_1"}) + series2, expectedVector2, _ := generateFloatSeries("series_2", series2Timestamp, prompb.Label{Name: "label_2", Value: "label_2"}) c, err := e2emimir.NewClient(distributor.HTTPEndpoint(), "", "", "", "user-1") require.NoError(t, err) @@ -114,7 +115,7 @@ func runBackwardCompatibilityTest(t *testing.T, previousImage string, oldFlagsMa // Push another series to further compact another block and delete the first block // due to expired retention. series3Timestamp := series2Timestamp.Add(blockRangePeriod * 2) - series3, expectedVector3, _ := generateFloatSeries("series_3", series3Timestamp, prompb.Label{Name: "series_3", Value: "series_3"}) + series3, expectedVector3, _ := generateFloatSeries("series_3", series3Timestamp, prompb.Label{Name: "label_3", Value: "label_3"}) res, err = c.Push(series3) require.NoError(t, err) @@ -139,19 +140,41 @@ func runBackwardCompatibilityTest(t *testing.T, previousImage string, oldFlagsMa compactor := e2emimir.NewCompactor("compactor", consul.NetworkHTTPEndpoint(), flags) require.NoError(t, s.StartAndWaitReady(compactor)) - checkQueries(t, consul, previousImage, flags, oldFlagsMapper, s, 1, instantQueryTest{ - expr: "series_1", - time: series1Timestamp, - expectedVector: expectedVector1, - }, instantQueryTest{ - expr: "series_2", - time: series2Timestamp, - expectedVector: expectedVector2, - }, instantQueryTest{ - expr: "series_3", - time: series3Timestamp, - expectedVector: expectedVector3, - }) + checkQueries(t, consul, previousImage, flags, oldFlagsMapper, s, 1, + []instantQueryTest{ + { + expr: "series_1", + time: series1Timestamp, + expectedVector: expectedVector1, + }, { + expr: "series_2", + time: series2Timestamp, + expectedVector: expectedVector2, + }, { + expr: "series_3", + time: series3Timestamp, + expectedVector: expectedVector3, + }, + }, + []remoteReadRequestTest{ + { + metricName: "series_1", + startTime: series1Timestamp.Add(-time.Minute), + endTime: series1Timestamp.Add(time.Minute), + expectedTimeseries: vectorToPrompbTimeseries(expectedVector1), + }, { + metricName: "series_2", + startTime: series2Timestamp.Add(-time.Minute), + endTime: series2Timestamp.Add(time.Minute), + expectedTimeseries: vectorToPrompbTimeseries(expectedVector2), + }, { + metricName: "series_3", + startTime: series3Timestamp.Add(-time.Minute), + endTime: series3Timestamp.Add(time.Minute), + expectedTimeseries: vectorToPrompbTimeseries(expectedVector3), + }, + }, + ) } // Check for issues like https://github.com/cortexproject/cortex/issues/2356 @@ -195,11 +218,11 @@ func runNewDistributorsCanPushToOldIngestersWithReplication(t *testing.T, previo require.NoError(t, err) require.Equal(t, 200, res.StatusCode) - checkQueries(t, consul, previousImage, flags, oldFlagsMapper, s, 3, instantQueryTest{ + checkQueries(t, consul, previousImage, flags, oldFlagsMapper, s, 3, []instantQueryTest{{ time: now, expr: "series_1", expectedVector: expectedVector, - }) + }}, nil) } func checkQueries( @@ -210,7 +233,8 @@ func checkQueries( oldFlagsMapper e2emimir.FlagMapper, s *e2e.Scenario, numIngesters 
int, - instantQueries ...instantQueryTest, + instantQueries []instantQueryTest, + remoteReadRequests []remoteReadRequestTest, ) { cases := map[string]struct { queryFrontendOptions []e2emimir.Option @@ -272,11 +296,21 @@ func checkQueries( require.NoError(t, err) for _, query := range instantQueries { - t.Run(fmt.Sprintf("%s: %s", endpoint, query.expr), func(t *testing.T) { + t.Run(fmt.Sprintf("%s: instant query: %s", endpoint, query.expr), func(t *testing.T) { result, err := c.Query(query.expr, query.time) require.NoError(t, err) require.Equal(t, model.ValVector, result.Type()) - assert.Equal(t, query.expectedVector, result.(model.Vector)) + require.Equal(t, query.expectedVector, result.(model.Vector)) + }) + } + + for _, req := range remoteReadRequests { + t.Run(fmt.Sprintf("%s: remote read: %s", endpoint, req.metricName), func(t *testing.T) { + httpRes, result, _, err := c.RemoteRead(req.metricName, req.startTime, req.endTime) + require.NoError(t, err) + require.Equal(t, http.StatusOK, httpRes.StatusCode) + require.NotNil(t, result) + require.Equal(t, req.expectedTimeseries, result.Timeseries) }) } } @@ -290,6 +324,13 @@ type instantQueryTest struct { expectedVector model.Vector } +type remoteReadRequestTest struct { + metricName string + startTime time.Time + endTime time.Time + expectedTimeseries []*prompb.TimeSeries +} + type testingLogger interface{ Logf(string, ...interface{}) } func previousImageVersionOverrides(t *testing.T) map[string]e2emimir.FlagMapper { diff --git a/integration/querier_test.go b/integration/querier_test.go index 8b2d52a92fd..43b674a6a2f 100644 --- a/integration/querier_test.go +++ b/integration/querier_test.go @@ -758,7 +758,7 @@ func testMetadataQueriesWithBlocksStorage( require.NoError(t, err) if st.ok { require.Equal(t, 1, len(seriesRes), st) - require.Equal(t, model.LabelSet(prompbLabelsToModelMetric(st.resp)), seriesRes[0], st) + require.Equal(t, model.LabelSet(prompbLabelsToMetric(st.resp)), seriesRes[0], st) } else { require.Equal(t, 0, len(seriesRes), st) } @@ -1025,7 +1025,7 @@ func TestHashCollisionHandling(t *testing.T) { }, }) expectedVector = append(expectedVector, &model.Sample{ - Metric: prompbLabelsToModelMetric(metric1), + Metric: prompbLabelsToMetric(metric1), Value: model.SampleValue(float64(0)), Timestamp: model.Time(tsMillis), }) @@ -1036,7 +1036,7 @@ func TestHashCollisionHandling(t *testing.T) { }, }) expectedVector = append(expectedVector, &model.Sample{ - Metric: prompbLabelsToModelMetric(metric2), + Metric: prompbLabelsToMetric(metric2), Value: model.SampleValue(float64(1)), Timestamp: model.Time(tsMillis), }) @@ -1070,13 +1070,3 @@ func getMetricName(lbls []prompb.Label) string { panic(fmt.Sprintf("series %v has no metric name", lbls)) } - -func prompbLabelsToModelMetric(pbLabels []prompb.Label) model.Metric { - metric := model.Metric{} - - for _, l := range pbLabels { - metric[model.LabelName(l.Name)] = model.LabelValue(l.Value) - } - - return metric -} diff --git a/integration/util.go b/integration/util.go index c571968ad02..d6e30b922e7 100644 --- a/integration/util.go +++ b/integration/util.go @@ -12,6 +12,8 @@ import ( "os" "os/exec" "path/filepath" + "slices" + "strings" "time" "github.com/grafana/e2e" @@ -258,3 +260,54 @@ func GenerateNHistogramSeries(nSeries, nExemplars int, name func() string, ts ti } return } + +func prompbLabelsToMetric(pbLabels []prompb.Label) model.Metric { + metric := make(model.Metric, len(pbLabels)) + + for _, l := range pbLabels { + metric[model.LabelName(l.Name)] = model.LabelValue(l.Value) + } + + 
return metric +} + +func metricToPrompbLabels(metric model.Metric) []prompb.Label { + lbls := make([]prompb.Label, 0, len(metric)) + + for name, value := range metric { + lbls = append(lbls, prompb.Label{ + Name: string(name), + Value: string(value), + }) + } + + // Sort labels because they're expected to be sorted by contract. + slices.SortFunc(lbls, func(a, b prompb.Label) int { + cmp := strings.Compare(a.Name, b.Name) + if cmp != 0 { + return cmp + } + + return strings.Compare(a.Value, b.Value) + }) + + return lbls +} + +func vectorToPrompbTimeseries(vector model.Vector) []*prompb.TimeSeries { + res := make([]*prompb.TimeSeries, 0, len(vector)) + + for _, sample := range vector { + res = append(res, &prompb.TimeSeries{ + Labels: metricToPrompbLabels(sample.Metric), + Samples: []prompb.Sample{ + { + Value: float64(sample.Value), + Timestamp: int64(sample.Timestamp), + }, + }, + }) + } + + return res +} diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go index 9bae65259ad..39484a429a0 100644 --- a/pkg/distributor/distributor_test.go +++ b/pkg/distributor/distributor_test.go @@ -6270,8 +6270,8 @@ func (i *mockIngester) QueryExemplars(ctx context.Context, req *client.ExemplarQ // Sort series by labels because the real ingester returns sorted ones. slices.SortFunc(res.Timeseries, func(a, b mimirpb.TimeSeries) int { - aKey := client.LabelsToKeyString(mimirpb.FromLabelAdaptersToLabels(a.Labels)) - bKey := client.LabelsToKeyString(mimirpb.FromLabelAdaptersToLabels(b.Labels)) + aKey := mimirpb.FromLabelAdaptersToKeyString(a.Labels) + bKey := mimirpb.FromLabelAdaptersToKeyString(b.Labels) return strings.Compare(aKey, bKey) }) diff --git a/pkg/distributor/query.go b/pkg/distributor/query.go index 4527ea475e5..86597a6adef 100644 --- a/pkg/distributor/query.go +++ b/pkg/distributor/query.go @@ -181,7 +181,7 @@ func mergeExemplarQueryResponses(results []*ingester_client.ExemplarQueryRespons exemplarResults := make(map[string]mimirpb.TimeSeries) for _, r := range results { for _, ts := range r.Timeseries { - lbls := ingester_client.LabelsToKeyString(mimirpb.FromLabelAdaptersToLabels(ts.Labels)) + lbls := mimirpb.FromLabelAdaptersToKeyString(ts.Labels) e, ok := exemplarResults[lbls] if !ok { exemplarResults[lbls] = ts @@ -374,7 +374,7 @@ func (d *Distributor) queryIngesterStream(ctx context.Context, replicationSets [ // Accumulate any chunk series for _, batch := range res.chunkseriesBatches { for _, series := range batch { - key := ingester_client.LabelsToKeyString(mimirpb.FromLabelAdaptersToLabels(series.Labels)) + key := mimirpb.FromLabelAdaptersToKeyString(series.Labels) existing := hashToChunkseries[key] existing.Labels = series.Labels @@ -390,7 +390,7 @@ func (d *Distributor) queryIngesterStream(ctx context.Context, replicationSets [ // Accumulate any time series for _, batch := range res.timeseriesBatches { for _, series := range batch { - key := ingester_client.LabelsToKeyString(mimirpb.FromLabelAdaptersToLabels(series.Labels)) + key := mimirpb.FromLabelAdaptersToKeyString(series.Labels) existing := hashToTimeSeries[key] existing.Labels = series.Labels if existing.Samples == nil { diff --git a/pkg/frontend/querymiddleware/cardinality.go b/pkg/frontend/querymiddleware/cardinality.go index 1584a67d9eb..eb6088aa9ae 100644 --- a/pkg/frontend/querymiddleware/cardinality.go +++ b/pkg/frontend/querymiddleware/cardinality.go @@ -76,7 +76,11 @@ func (c *cardinalityEstimation) Do(ctx context.Context, request MetricsQueryRequ estimatedCardinality, estimateAvailable := 
c.lookupCardinalityForKey(ctx, k) if estimateAvailable { - request = request.WithEstimatedSeriesCountHint(estimatedCardinality) + newRequest, err := request.WithEstimatedSeriesCountHint(estimatedCardinality) + if err != nil { + return c.next.Do(ctx, request) + } + request = newRequest spanLog.LogFields( otlog.Bool("estimate available", true), otlog.Uint64("estimated cardinality", estimatedCardinality), diff --git a/pkg/frontend/querymiddleware/codec.go b/pkg/frontend/querymiddleware/codec.go index aa71f8d31f3..48bdc5c9fd4 100644 --- a/pkg/frontend/querymiddleware/codec.go +++ b/pkg/frontend/querymiddleware/codec.go @@ -118,7 +118,7 @@ type MetricsQueryRequest interface { // These hints can be used to optimize the query execution. GetHints() *Hints // WithID clones the current request with the provided ID. - WithID(id int64) MetricsQueryRequest + WithID(id int64) (MetricsQueryRequest, error) // WithStartEnd clone the current request with different start and end timestamp. // Implementations must ensure minT and maxT are recalculated when the start and end timestamp change. WithStartEnd(startTime int64, endTime int64) (MetricsQueryRequest, error) @@ -126,14 +126,14 @@ type MetricsQueryRequest interface { // Implementations must ensure minT and maxT are recalculated when the query changes. WithQuery(string) (MetricsQueryRequest, error) // WithHeaders clones the current request with different headers. - WithHeaders([]*PrometheusHeader) MetricsQueryRequest + WithHeaders([]*PrometheusHeader) (MetricsQueryRequest, error) // WithExpr clones the current `PrometheusRangeQueryRequest` with a new query expression. // Implementations must ensure minT and maxT are recalculated when the query changes. - WithExpr(parser.Expr) MetricsQueryRequest + WithExpr(parser.Expr) (MetricsQueryRequest, error) // WithTotalQueriesHint adds the number of total queries to this request's Hints. - WithTotalQueriesHint(int32) MetricsQueryRequest + WithTotalQueriesHint(int32) (MetricsQueryRequest, error) // WithEstimatedSeriesCountHint WithEstimatedCardinalityHint adds a cardinality estimate to this request's Hints. 
- WithEstimatedSeriesCountHint(uint64) MetricsQueryRequest + WithEstimatedSeriesCountHint(uint64) (MetricsQueryRequest, error) // AddSpanTags writes information about this request to an OpenTracing span AddSpanTags(opentracing.Span) } @@ -801,7 +801,7 @@ func matrixMerge(resps []*PrometheusResponse) []SampleStream { continue } for _, stream := range resp.Data.Result { - metric := mimirpb.FromLabelAdaptersToLabels(stream.Labels).String() + metric := mimirpb.FromLabelAdaptersToKeyString(stream.Labels) existing, ok := output[metric] if !ok { existing = &SampleStream{ diff --git a/pkg/frontend/querymiddleware/codec_test.go b/pkg/frontend/querymiddleware/codec_test.go index bb9938c97a9..3b0679ccbe6 100644 --- a/pkg/frontend/querymiddleware/codec_test.go +++ b/pkg/frontend/querymiddleware/codec_test.go @@ -459,7 +459,7 @@ func TestMetricsQuery_WithQuery_WithExpr_TransformConsistency(t *testing.T) { // test WithExpr on the same query as WithQuery queryExpr, err := parser.ParseExpr(testCase.updatedQuery) - updatedMetricsQuery = testCase.initialMetricsQuery.WithExpr(queryExpr) + updatedMetricsQuery = mustSucceed(testCase.initialMetricsQuery.WithExpr(queryExpr)) if err != nil || testCase.expectedErr != nil { require.IsType(t, testCase.expectedErr, err) diff --git a/pkg/frontend/querymiddleware/limits_test.go b/pkg/frontend/querymiddleware/limits_test.go index cc2fa3e071d..24b56ad5d16 100644 --- a/pkg/frontend/querymiddleware/limits_test.go +++ b/pkg/frontend/querymiddleware/limits_test.go @@ -7,7 +7,6 @@ package querymiddleware import ( "context" - "errors" "fmt" "net/http" "strings" @@ -17,9 +16,9 @@ import ( "github.com/go-kit/log" "github.com/grafana/dskit/user" - "github.com/prometheus/prometheus/model/labels" "github.com/prometheus/prometheus/prompb" "github.com/prometheus/prometheus/promql/parser" + "github.com/prometheus/prometheus/storage/remote" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/mock" "github.com/stretchr/testify/require" @@ -29,11 +28,10 @@ import ( "github.com/grafana/mimir/pkg/util/validation" ) -func TestLimitsMiddleware_MaxQueryLookback(t *testing.T) { +func TestLimitsMiddleware_MaxQueryLookback_RangeQueryAndRemoteRead(t *testing.T) { const ( - fifteenDays = 15 * 24 * time.Hour - thirtyDays = 30 * 24 * time.Hour - sixtyDays = 60 * 24 * time.Hour + thirtyDays = 30 * 24 * time.Hour + sixtyDays = 60 * 24 * time.Hour ) now := time.Now() @@ -47,7 +45,7 @@ func TestLimitsMiddleware_MaxQueryLookback(t *testing.T) { expectedStartTime time.Time expectedEndTime time.Time }{ - "should not manipulate time range if max lookback is disabled": { + "should not manipulate time range if maxQueryLookback and blocksRetentionPeriod are both disabled": { maxQueryLookback: 0, blocksRetentionPeriod: 0, reqStartTime: time.Unix(0, 0), @@ -186,6 +184,103 @@ func TestLimitsMiddleware_MaxQueryLookback(t *testing.T) { } } +func TestLimitsMiddleware_MaxQueryLookback_InstantQuery(t *testing.T) { + const ( + thirtyDays = 30 * 24 * time.Hour + sixtyDays = 60 * 24 * time.Hour + ) + + now := time.Now() + + tests := map[string]struct { + maxQueryLookback time.Duration + blocksRetentionPeriod time.Duration + reqTime time.Time + expectedSkipped bool + expectedTime time.Time + }{ + "should allow executing a query if maxQueryLookback and blocksRetentionPeriod are both disabled": { + maxQueryLookback: 0, + blocksRetentionPeriod: 0, + reqTime: time.Unix(0, 0), + expectedTime: time.Unix(0, 0), + }, + "should allow executing a query with time within maxQueryLookback and blocksRetentionPeriod": { + 
maxQueryLookback:      thirtyDays,
+			blocksRetentionPeriod: thirtyDays,
+			reqTime:               now.Add(-time.Hour),
+			expectedTime:          now.Add(-time.Hour),
+		},
+		"should allow executing a query with time close to maxQueryLookback and blocksRetentionPeriod": {
+			maxQueryLookback:      thirtyDays,
+			blocksRetentionPeriod: thirtyDays,
+			reqTime:               now.Add(-thirtyDays).Add(time.Hour),
+			expectedTime:          now.Add(-thirtyDays).Add(time.Hour),
+		},
+		"should skip executing a query with time before the maxQueryLookback limit, and blocksRetentionPeriod is not set": {
+			maxQueryLookback:      thirtyDays,
+			blocksRetentionPeriod: 0,
+			reqTime:               now.Add(-thirtyDays).Add(-100 * time.Hour),
+			expectedSkipped:       true,
+		},
+		"should skip executing a query with time before the blocksRetentionPeriod, and maxQueryLookback limit is not set": {
+			maxQueryLookback:      0,
+			blocksRetentionPeriod: thirtyDays,
+			reqTime:               now.Add(-thirtyDays).Add(-100 * time.Hour),
+			expectedSkipped:       true,
+		},
+		"should skip executing a query with time before the maxQueryLookback limit, and blocksRetentionPeriod is set to a higher value": {
+			maxQueryLookback:      thirtyDays,
+			blocksRetentionPeriod: sixtyDays,
+			reqTime:               now.Add(-thirtyDays).Add(-100 * time.Hour),
+			expectedSkipped:       true,
+		},
+		"should skip executing a query with time before the blocksRetentionPeriod, and maxQueryLookback limit is set to a higher value": {
+			maxQueryLookback:      sixtyDays,
+			blocksRetentionPeriod: thirtyDays,
+			reqTime:               now.Add(-thirtyDays).Add(-100 * time.Hour),
+			expectedSkipped:       true,
+		},
+	}
+
+	for testName, testData := range tests {
+		t.Run(testName, func(t *testing.T) {
+			req := &PrometheusInstantQueryRequest{
+				time: testData.reqTime.UnixMilli(),
+			}
+
+			limits := mockLimits{maxQueryLookback: testData.maxQueryLookback, compactorBlocksRetentionPeriod: testData.blocksRetentionPeriod}
+			middleware := newLimitsMiddleware(limits, log.NewNopLogger())
+
+			innerRes := newEmptyPrometheusResponse()
+			inner := &mockHandler{}
+			inner.On("Do", mock.Anything, mock.Anything).Return(innerRes, nil)
+
+			ctx := user.InjectOrgID(context.Background(), "test")
+			outer := middleware.Wrap(inner)
+			res, err := outer.Do(ctx, req)
+			require.NoError(t, err)
+
+			if testData.expectedSkipped {
+				// We expect an empty response, but not the one returned by the inner handler,
+				// which we expect to have been skipped.
+				assert.NotSame(t, innerRes, res)
+				assert.Len(t, inner.Calls, 0)
+			} else {
+				// We expect the response returned by the inner handler.
+				assert.Same(t, innerRes, res)
+
+				// Assert on the time range of the request passed to the inner handler (5s delta).
+ delta := float64(5000) + require.Len(t, inner.Calls, 1) + + assert.InDelta(t, util.TimeToMillis(testData.expectedTime), inner.Calls[0].Arguments.Get(1).(MetricsQueryRequest).GetStart(), delta) + assert.InDelta(t, util.TimeToMillis(testData.expectedTime), inner.Calls[0].Arguments.Get(1).(MetricsQueryRequest).GetEnd(), delta) + } + }) + } +} + func TestLimitsMiddleware_MaxQueryExpressionSizeBytes(t *testing.T) { now := time.Now() @@ -232,6 +327,9 @@ func TestLimitsMiddleware_MaxQueryExpressionSizeBytes(t *testing.T) { start: startMs, end: endMs, }, + "instant query": &PrometheusInstantQueryRequest{ + queryExpr: parseQuery(t, testData.query), + }, "remote read": &remoteReadQueryRequest{ path: remoteReadPathSuffix, promQuery: testData.query, @@ -246,7 +344,7 @@ func TestLimitsMiddleware_MaxQueryExpressionSizeBytes(t *testing.T) { require.Len(t, v.selectors, 1) require.NotEmpty(t, v.selectors[0].LabelMatchers) - matchers, err := toLabelMatchers(v.selectors[0].LabelMatchers) + matchers, err := remote.ToLabelMatchers(v.selectors[0].LabelMatchers) require.NoError(t, err) return matchers @@ -342,6 +440,7 @@ func TestLimitsMiddleware_MaxQueryLength(t *testing.T) { for testName, testData := range tests { t.Run(testName, func(t *testing.T) { + // NOTE: instant queries are not tested because they don't have a time range. reqs := map[string]MetricsQueryRequest{ "range query": &PrometheusRangeQueryRequest{ start: util.TimeToMillis(testData.reqStartTime), @@ -851,30 +950,3 @@ func (v *findVectorSelectorsVisitor) Visit(node parser.Node, _ []parser.Node) (p v.selectors = append(v.selectors, selector) return v, nil } - -// This function has been copied from: -// https://github.com/prometheus/prometheus/blob/5efc8dd27b6e68d5102b77bc708e52c9821c5101/storage/remote/codec.go#L569 -func toLabelMatchers(matchers []*labels.Matcher) ([]*prompb.LabelMatcher, error) { - pbMatchers := make([]*prompb.LabelMatcher, 0, len(matchers)) - for _, m := range matchers { - var mType prompb.LabelMatcher_Type - switch m.Type { - case labels.MatchEqual: - mType = prompb.LabelMatcher_EQ - case labels.MatchNotEqual: - mType = prompb.LabelMatcher_NEQ - case labels.MatchRegexp: - mType = prompb.LabelMatcher_RE - case labels.MatchNotRegexp: - mType = prompb.LabelMatcher_NRE - default: - return nil, errors.New("invalid matcher type") - } - pbMatchers = append(pbMatchers, &prompb.LabelMatcher{ - Type: mType, - Name: m.Name, - Value: m.Value, - }) - } - return pbMatchers, nil -} diff --git a/pkg/frontend/querymiddleware/model_extra.go b/pkg/frontend/querymiddleware/model_extra.go index bd5dff8e1bb..8abc8933434 100644 --- a/pkg/frontend/querymiddleware/model_extra.go +++ b/pkg/frontend/querymiddleware/model_extra.go @@ -149,11 +149,11 @@ func (r *PrometheusRangeQueryRequest) GetHints() *Hints { } // WithID clones the current `PrometheusRangeQueryRequest` with the provided ID. -func (r *PrometheusRangeQueryRequest) WithID(id int64) MetricsQueryRequest { +func (r *PrometheusRangeQueryRequest) WithID(id int64) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) newRequest.id = id - return &newRequest + return &newRequest, nil } // WithStartEnd clones the current `PrometheusRangeQueryRequest` with a new `start` and `end` timestamp. @@ -189,14 +189,14 @@ func (r *PrometheusRangeQueryRequest) WithQuery(query string) (MetricsQueryReque } // WithHeaders clones the current `PrometheusRangeQueryRequest` with new headers. 
-func (r *PrometheusRangeQueryRequest) WithHeaders(headers []*PrometheusHeader) MetricsQueryRequest { +func (r *PrometheusRangeQueryRequest) WithHeaders(headers []*PrometheusHeader) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(headers) - return &newRequest + return &newRequest, nil } // WithExpr clones the current `PrometheusRangeQueryRequest` with a new query expression. -func (r *PrometheusRangeQueryRequest) WithExpr(queryExpr parser.Expr) MetricsQueryRequest { +func (r *PrometheusRangeQueryRequest) WithExpr(queryExpr parser.Expr) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) newRequest.queryExpr = queryExpr @@ -205,12 +205,12 @@ func (r *PrometheusRangeQueryRequest) WithExpr(queryExpr parser.Expr) MetricsQue newRequest.queryExpr, newRequest.GetStart(), newRequest.GetEnd(), newRequest.GetStep(), newRequest.lookbackDelta, ) } - return &newRequest + return &newRequest, nil } // WithTotalQueriesHint clones the current `PrometheusRangeQueryRequest` with an // added Hint value for TotalQueries. -func (r *PrometheusRangeQueryRequest) WithTotalQueriesHint(totalQueries int32) MetricsQueryRequest { +func (r *PrometheusRangeQueryRequest) WithTotalQueriesHint(totalQueries int32) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) if newRequest.hints == nil { @@ -219,12 +219,12 @@ func (r *PrometheusRangeQueryRequest) WithTotalQueriesHint(totalQueries int32) M *newRequest.hints = *(r.hints) newRequest.hints.TotalQueries = totalQueries } - return &newRequest + return &newRequest, nil } // WithEstimatedSeriesCountHint clones the current `PrometheusRangeQueryRequest` // with an added Hint value for EstimatedCardinality. -func (r *PrometheusRangeQueryRequest) WithEstimatedSeriesCountHint(count uint64) MetricsQueryRequest { +func (r *PrometheusRangeQueryRequest) WithEstimatedSeriesCountHint(count uint64) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) if newRequest.hints == nil { @@ -235,7 +235,7 @@ func (r *PrometheusRangeQueryRequest) WithEstimatedSeriesCountHint(count uint64) *newRequest.hints = *(r.hints) newRequest.hints.CardinalityEstimate = &EstimatedSeriesCount{count} } - return &newRequest + return &newRequest, nil } // AddSpanTags writes the current `PrometheusRangeQueryRequest` parameters to the specified span tags @@ -355,11 +355,11 @@ func (r *PrometheusInstantQueryRequest) GetHints() *Hints { return r.hints } -func (r *PrometheusInstantQueryRequest) WithID(id int64) MetricsQueryRequest { +func (r *PrometheusInstantQueryRequest) WithID(id int64) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) newRequest.id = id - return &newRequest + return &newRequest, nil } // WithStartEnd clones the current `PrometheusInstantQueryRequest` with a new `time` timestamp. @@ -394,14 +394,14 @@ func (r *PrometheusInstantQueryRequest) WithQuery(query string) (MetricsQueryReq } // WithHeaders clones the current `PrometheusRangeQueryRequest` with new headers. -func (r *PrometheusInstantQueryRequest) WithHeaders(headers []*PrometheusHeader) MetricsQueryRequest { +func (r *PrometheusInstantQueryRequest) WithHeaders(headers []*PrometheusHeader) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(headers) - return &newRequest + return &newRequest, nil } // WithExpr clones the current `PrometheusInstantQueryRequest` with a new query expression. 
-func (r *PrometheusInstantQueryRequest) WithExpr(queryExpr parser.Expr) MetricsQueryRequest { +func (r *PrometheusInstantQueryRequest) WithExpr(queryExpr parser.Expr) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) newRequest.queryExpr = queryExpr @@ -410,10 +410,10 @@ func (r *PrometheusInstantQueryRequest) WithExpr(queryExpr parser.Expr) MetricsQ newRequest.queryExpr, newRequest.GetStart(), newRequest.GetEnd(), newRequest.GetStep(), newRequest.lookbackDelta, ) } - return &newRequest + return &newRequest, nil } -func (r *PrometheusInstantQueryRequest) WithTotalQueriesHint(totalQueries int32) MetricsQueryRequest { +func (r *PrometheusInstantQueryRequest) WithTotalQueriesHint(totalQueries int32) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) if newRequest.hints == nil { @@ -422,10 +422,10 @@ func (r *PrometheusInstantQueryRequest) WithTotalQueriesHint(totalQueries int32) *newRequest.hints = *(r.hints) newRequest.hints.TotalQueries = totalQueries } - return &newRequest + return &newRequest, nil } -func (r *PrometheusInstantQueryRequest) WithEstimatedSeriesCountHint(count uint64) MetricsQueryRequest { +func (r *PrometheusInstantQueryRequest) WithEstimatedSeriesCountHint(count uint64) (MetricsQueryRequest, error) { newRequest := *r newRequest.headers = cloneHeaders(r.headers) if newRequest.hints == nil { @@ -436,7 +436,7 @@ func (r *PrometheusInstantQueryRequest) WithEstimatedSeriesCountHint(count uint6 *newRequest.hints = *(r.hints) newRequest.hints.CardinalityEstimate = &EstimatedSeriesCount{count} } - return &newRequest + return &newRequest, nil } // AddSpanTags writes query information about the current `PrometheusInstantQueryRequest` diff --git a/pkg/frontend/querymiddleware/model_extra_test.go b/pkg/frontend/querymiddleware/model_extra_test.go index c3306e67728..5bb136907d0 100644 --- a/pkg/frontend/querymiddleware/model_extra_test.go +++ b/pkg/frontend/querymiddleware/model_extra_test.go @@ -122,7 +122,8 @@ func TestMetricQueryRequestCloneHeaders(t *testing.T) { require.NoError(t, err) t.Run("WithID", func(t *testing.T) { - r := originalReq.WithID(1234) + r, err := originalReq.WithID(1234) + require.NoError(t, err) validateClonedHeaders(t, r.GetHeaders(), originalReq.GetHeaders()) }) t.Run("WithHeaders", func(t *testing.T) { @@ -131,7 +132,8 @@ func TestMetricQueryRequestCloneHeaders(t *testing.T) { {Name: "X-Test-Header", Values: []string{"test-value"}}, } - r := originalReq.WithHeaders(newHeaders) + r, err := originalReq.WithHeaders(newHeaders) + require.NoError(t, err) validateClonedHeaders(t, r.GetHeaders(), newHeaders) }) t.Run("WithStartEnd", func(t *testing.T) { @@ -147,15 +149,18 @@ func TestMetricQueryRequestCloneHeaders(t *testing.T) { validateClonedHeaders(t, r.GetHeaders(), originalReq.GetHeaders()) }) t.Run("WithTotalQueriesHint", func(t *testing.T) { - r := originalReq.WithTotalQueriesHint(10) + r, err := originalReq.WithTotalQueriesHint(10) + require.NoError(t, err) validateClonedHeaders(t, r.GetHeaders(), originalReq.GetHeaders()) }) t.Run("WithExpr", func(t *testing.T) { - r := originalReq.WithExpr(nil) + r, err := originalReq.WithExpr(nil) + require.NoError(t, err) validateClonedHeaders(t, r.GetHeaders(), originalReq.GetHeaders()) }) t.Run("WithEstimatedSeriesCountHint", func(t *testing.T) { - r := originalReq.WithEstimatedSeriesCountHint(10) + r, err := originalReq.WithEstimatedSeriesCountHint(10) + require.NoError(t, err) validateClonedHeaders(t, r.GetHeaders(), 
originalReq.GetHeaders()) }) }) diff --git a/pkg/frontend/querymiddleware/querysharding.go b/pkg/frontend/querymiddleware/querysharding.go index d3e118304e3..0f7c3582f8a 100644 --- a/pkg/frontend/querymiddleware/querysharding.go +++ b/pkg/frontend/querymiddleware/querysharding.go @@ -194,7 +194,21 @@ func newQuery(ctx context.Context, r MetricsQueryRequest, engine *promql.Engine, r.GetQuery(), util.TimeFromMillis(r.GetTime()), ) - + case *remoteReadQueryRequest: + return engine.NewRangeQuery( + ctx, + queryable, + // Lookback period is not applied to remote read queries in the same way + // as regular queries. However we cannot set a zero lookback period + // because the engine will just use the default 5 minutes instead. So we + // set a lookback period of 1ns and add that amount to the start time so + // the engine will calculate an effective 0 lookback period. + promql.NewPrometheusQueryOpts(false, 1*time.Nanosecond), + r.GetQuery(), + util.TimeFromMillis(r.GetStart()).Add(1*time.Nanosecond), + util.TimeFromMillis(r.GetEnd()), + time.Duration(r.GetStep())*time.Millisecond, + ) default: return nil, fmt.Errorf("unsupported query type %T", r) } diff --git a/pkg/frontend/querymiddleware/querysharding_test.go b/pkg/frontend/querymiddleware/querysharding_test.go index 52cd0249c4c..bdca1b876a1 100644 --- a/pkg/frontend/querymiddleware/querysharding_test.go +++ b/pkg/frontend/querymiddleware/querysharding_test.go @@ -1686,12 +1686,12 @@ func TestQuerySharding_ShouldUseCardinalityEstimate(t *testing.T) { }{ { "range query", - mustSucceed(req.WithStartEnd(util.TimeToMillis(start), util.TimeToMillis(end))).WithEstimatedSeriesCountHint(55_000), + mustSucceed(mustSucceed(req.WithStartEnd(util.TimeToMillis(start), util.TimeToMillis(end))).WithEstimatedSeriesCountHint(55_000)), 6, }, { "instant query", - req.WithEstimatedSeriesCountHint(29_000), + mustSucceed(req.WithEstimatedSeriesCountHint(29_000)), 3, }, { diff --git a/pkg/frontend/querymiddleware/remote_read.go b/pkg/frontend/querymiddleware/remote_read.go index 8d0a80bb4f1..c40f05cdf7e 100644 --- a/pkg/frontend/querymiddleware/remote_read.go +++ b/pkg/frontend/querymiddleware/remote_read.go @@ -241,9 +241,7 @@ func (r *remoteReadQueryRequest) GetHints() *Hints { } func (r *remoteReadQueryRequest) GetStep() int64 { - if r.query.Hints != nil { - return r.query.Hints.GetStepMs() - } + // Step is ignored when the remote read query is executed. return 0 } @@ -252,10 +250,14 @@ func (r *remoteReadQueryRequest) GetID() int64 { } func (r *remoteReadQueryRequest) GetMaxT() int64 { + // MaxT hint is ignored when the remote read query is executed. + // Therefore we return the end time. return r.GetEnd() } func (r *remoteReadQueryRequest) GetMinT() int64 { + // MinT hint is ignored when the remote read query is executed. + // Therefore we return the start time. 
return r.GetStart() } @@ -275,24 +277,24 @@ func (r *remoteReadQueryRequest) GetHeaders() []*PrometheusHeader { return nil } -func (r *remoteReadQueryRequest) WithID(_ int64) MetricsQueryRequest { - panic("not implemented") +func (r *remoteReadQueryRequest) WithID(_ int64) (MetricsQueryRequest, error) { + return nil, apierror.New(apierror.TypeInternal, "remoteReadQueryRequest.WithID not implemented") } -func (r *remoteReadQueryRequest) WithEstimatedSeriesCountHint(_ uint64) MetricsQueryRequest { - panic("not implemented") +func (r *remoteReadQueryRequest) WithEstimatedSeriesCountHint(_ uint64) (MetricsQueryRequest, error) { + return nil, apierror.New(apierror.TypeInternal, "remoteReadQueryRequest.WithEstimatedSeriesCountHint not implemented") } -func (r *remoteReadQueryRequest) WithExpr(_ parser.Expr) MetricsQueryRequest { - panic("not implemented") +func (r *remoteReadQueryRequest) WithExpr(_ parser.Expr) (MetricsQueryRequest, error) { + return nil, apierror.New(apierror.TypeInternal, "remoteReadQueryRequest.WithExpr not implemented") } func (r *remoteReadQueryRequest) WithQuery(_ string) (MetricsQueryRequest, error) { - panic("not implemented") + return nil, apierror.New(apierror.TypeInternal, "remoteReadQueryRequest.WithQuery not implemented") } -func (r *remoteReadQueryRequest) WithHeaders(_ []*PrometheusHeader) MetricsQueryRequest { - panic("not implemented") +func (r *remoteReadQueryRequest) WithHeaders(_ []*PrometheusHeader) (MetricsQueryRequest, error) { + return nil, apierror.New(apierror.TypeInternal, "remoteReadQueryRequest.WithHeaders not implemented") } // WithStartEnd clones the current remoteReadQueryRequest with a new start and end timestamp. @@ -318,8 +320,8 @@ func (r *remoteReadQueryRequest) WithStartEnd(start int64, end int64) (MetricsQu return remoteReadToMetricsQueryRequest(r.path, clonedQuery) } -func (r *remoteReadQueryRequest) WithTotalQueriesHint(_ int32) MetricsQueryRequest { - panic("not implemented") +func (r *remoteReadQueryRequest) WithTotalQueriesHint(_ int32) (MetricsQueryRequest, error) { + return nil, apierror.New(apierror.TypeInternal, "remoteReadQueryRequest.WithTotalQueriesHint not implemented") } // cloneRemoteReadQuery returns a deep copy of the input prompb.Query. To keep this function safe, diff --git a/pkg/frontend/querymiddleware/remote_read_test.go b/pkg/frontend/querymiddleware/remote_read_test.go index 24d7a7ee82a..9e760d9c15a 100644 --- a/pkg/frontend/querymiddleware/remote_read_test.go +++ b/pkg/frontend/querymiddleware/remote_read_test.go @@ -436,11 +436,19 @@ func makeTestRemoteReadRequest() *prompb.ReadRequest { } } -// This is not a full test yet, only tests what's needed for the query blocker. +// This is not a full test yet, only tests what's needed for the query blocker and stats. 
func TestRemoteReadToMetricsQueryRequest(t *testing.T) { - remoteReadRequest := &prompb.ReadRequest{ - Queries: []*prompb.Query{ - { + testCases := map[string]struct { + query *prompb.Query + expectedQuery string + expectedStep int64 + expectedStart int64 + expectedEnd int64 + expectedMinT int64 + expectedMaxT int64 + }{ + "query without hints": { + query: &prompb.Query{ Matchers: []*prompb.LabelMatcher{ {Name: "__name__", Type: prompb.LabelMatcher_EQ, Value: "some_metric"}, {Name: "foo", Type: prompb.LabelMatcher_RE, Value: ".*bar.*"}, @@ -448,25 +456,45 @@ func TestRemoteReadToMetricsQueryRequest(t *testing.T) { StartTimestampMs: 10, EndTimestampMs: 20, }, - { + expectedQuery: "{__name__=\"some_metric\",foo=~\".*bar.*\"}", + expectedStep: 0, + expectedStart: 10, + expectedEnd: 20, + expectedMinT: 10, + expectedMaxT: 20, + }, + "query with hints": { + query: &prompb.Query{ Matchers: []*prompb.LabelMatcher{ {Name: "__name__", Type: prompb.LabelMatcher_EQ, Value: "up"}, }, + StartTimestampMs: 10, + EndTimestampMs: 20, Hints: &prompb.ReadHints{ - StepMs: 1000, + StartMs: 5, + EndMs: 25, + StepMs: 1000, }, }, + expectedQuery: "{__name__=\"up\"}", + expectedStep: 0, + expectedStart: 10, + expectedEnd: 20, + expectedMinT: 10, + expectedMaxT: 20, }, } - expectedGetQuery := []string{ - "{__name__=\"some_metric\",foo=~\".*bar.*\"}", - "{__name__=\"up\"}", - } - - for i, query := range remoteReadRequest.Queries { - metricsQR, err := remoteReadToMetricsQueryRequest("something", query) - require.NoError(t, err) - require.Equal(t, expectedGetQuery[i], metricsQR.GetQuery()) + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + metricsQR, err := remoteReadToMetricsQueryRequest("something", tc.query) + require.NoError(t, err) + require.Equal(t, tc.expectedQuery, metricsQR.GetQuery()) + require.Equal(t, tc.expectedStep, metricsQR.GetStep()) + require.Equal(t, tc.expectedStart, metricsQR.GetStart()) + require.Equal(t, tc.expectedEnd, metricsQR.GetEnd()) + require.Equal(t, tc.expectedMinT, metricsQR.GetMinT()) + require.Equal(t, tc.expectedMaxT, metricsQR.GetMaxT()) + }) } } diff --git a/pkg/frontend/querymiddleware/roundtrip.go b/pkg/frontend/querymiddleware/roundtrip.go index 58aff4b7333..c504019801b 100644 --- a/pkg/frontend/querymiddleware/roundtrip.go +++ b/pkg/frontend/querymiddleware/roundtrip.go @@ -313,6 +313,8 @@ func newQueryMiddlewares( queryStatsMiddleware := newQueryStatsMiddleware(registerer, engine) remoteReadMiddleware = append(remoteReadMiddleware, + // Track query range statistics. Added first before any subsequent middleware modifies the request. + queryStatsMiddleware, newLimitsMiddleware(limits, log), queryBlockerMiddleware) diff --git a/pkg/frontend/querymiddleware/roundtrip_test.go b/pkg/frontend/querymiddleware/roundtrip_test.go index aa5fa3219f2..8650547ab40 100644 --- a/pkg/frontend/querymiddleware/roundtrip_test.go +++ b/pkg/frontend/querymiddleware/roundtrip_test.go @@ -569,7 +569,6 @@ func TestMiddlewaresConsistency(t *testing.T) { exceptions: []string{ "instrumentMiddleware", "querySharding", // No query sharding support. - "queryStatsMiddleware", "retry", "splitAndCacheMiddleware", // No time splitting and results cache support. "splitInstantQueryByIntervalMiddleware", // Not applicable because specific to instant queries. 
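The queryStatsMiddleware added to the remote read chain above runs once per query contained in a remote read request, so the recorded query time range has to be widened across calls rather than overwritten on each pass (see the stats.go hunk further below). The following standalone Go sketch illustrates that min/max accumulation under simplified assumptions; rangeDetails and observeQueryRange are illustrative stand-ins, not the actual Mimir QueryDetails type or middleware API.

package main

import (
	"fmt"
	"time"
)

// rangeDetails is an illustrative stand-in for the details tracked by the stats middleware.
type rangeDetails struct {
	Start, End time.Time
}

// observeQueryRange widens the tracked range: after being called once per query in a
// remote read request, Start holds the minimum start and End the maximum end seen so far.
func observeQueryRange(d *rangeDetails, start, end time.Time) {
	if d.Start.IsZero() || d.Start.After(start) {
		d.Start = start
	}
	if d.End.IsZero() || d.End.Before(end) {
		d.End = end
	}
}

func main() {
	var d rangeDetails
	now := time.Now()

	// Two queries from the same remote read request with different time ranges.
	observeQueryRange(&d, now.Add(-30*time.Minute), now)
	observeQueryRange(&d, now.Add(-10*time.Minute), now.Add(10*time.Minute))

	// Prints: true true
	fmt.Println(d.Start.Equal(now.Add(-30*time.Minute)), d.End.Equal(now.Add(10*time.Minute)))
}

The actual change applies the same idea to MinT/MaxT derived from promql.FindMinMaxTime, as shown in the stats.go hunk below.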
diff --git a/pkg/frontend/querymiddleware/split_and_cache.go b/pkg/frontend/querymiddleware/split_and_cache.go index 30c66035039..12b3c82d4b4 100644 --- a/pkg/frontend/querymiddleware/split_and_cache.go +++ b/pkg/frontend/querymiddleware/split_and_cache.go @@ -215,7 +215,10 @@ func (s *splitAndCacheMiddleware) Do(ctx context.Context, req MetricsQueryReques } // Prepare and execute the downstream requests. - execReqs := splitReqs.prepareDownstreamRequests() + execReqs, err := splitReqs.prepareDownstreamRequests() + if err != nil { + return nil, err + } // Update query stats. // Only consider the actual number of downstream requests, not the cache hits. @@ -519,12 +522,12 @@ func (s *splitRequests) countDownstreamResponseBytes() int { // prepareDownstreamRequests injects a unique ID and hints to all downstream requests and // initialize downstream responses slice to have the same length of requests. -func (s *splitRequests) prepareDownstreamRequests() []MetricsQueryRequest { +func (s *splitRequests) prepareDownstreamRequests() ([]MetricsQueryRequest, error) { // Count the total number of downstream requests to run and build the hints we're going // to attach to each request. numDownstreamRequests := s.countDownstreamRequests() if numDownstreamRequests == 0 { - return nil + return nil, nil } // Build the whole list of requests to execute. For each downstream request, @@ -535,7 +538,15 @@ func (s *splitRequests) prepareDownstreamRequests() []MetricsQueryRequest { execReqs := make([]MetricsQueryRequest, 0, numDownstreamRequests) for _, splitReq := range *s { for i := 0; i < len(splitReq.downstreamRequests); i++ { - splitReq.downstreamRequests[i] = splitReq.downstreamRequests[i].WithID(nextReqID).WithTotalQueriesHint(int32(numDownstreamRequests)) + newRequest, err := splitReq.downstreamRequests[i].WithID(nextReqID) + if err != nil { + return nil, err + } + newRequest, err = newRequest.WithTotalQueriesHint(int32(numDownstreamRequests)) + if err != nil { + return nil, err + } + splitReq.downstreamRequests[i] = newRequest nextReqID++ } @@ -543,7 +554,7 @@ func (s *splitRequests) prepareDownstreamRequests() []MetricsQueryRequest { splitReq.downstreamResponses = make([]Response, len(splitReq.downstreamRequests)) } - return execReqs + return execReqs, nil } // storeDownstreamResponses associates the given executed requestResponse with the downstream requests diff --git a/pkg/frontend/querymiddleware/split_and_cache_test.go b/pkg/frontend/querymiddleware/split_and_cache_test.go index 1c1a8c87290..cef551e65ff 100644 --- a/pkg/frontend/querymiddleware/split_and_cache_test.go +++ b/pkg/frontend/querymiddleware/split_and_cache_test.go @@ -1319,9 +1319,9 @@ func TestSplitRequests_prepareDownstreamRequests(t *testing.T) { {downstreamRequests: []MetricsQueryRequest{&PrometheusRangeQueryRequest{start: 3}}}, }, expected: []MetricsQueryRequest{ - (&PrometheusRangeQueryRequest{start: 1}).WithID(1).WithTotalQueriesHint(3), - (&PrometheusRangeQueryRequest{start: 2}).WithID(2).WithTotalQueriesHint(3), - (&PrometheusRangeQueryRequest{start: 3}).WithID(3).WithTotalQueriesHint(3), + mustSucceed(mustSucceed((&PrometheusRangeQueryRequest{start: 1}).WithID(1)).WithTotalQueriesHint(3)), + mustSucceed(mustSucceed((&PrometheusRangeQueryRequest{start: 2}).WithID(2)).WithTotalQueriesHint(3)), + mustSucceed(mustSucceed((&PrometheusRangeQueryRequest{start: 3}).WithID(3)).WithTotalQueriesHint(3)), }, }, } @@ -1333,7 +1333,7 @@ func TestSplitRequests_prepareDownstreamRequests(t *testing.T) { require.Empty(t, 
req.downstreamResponses) } - assert.Equal(t, testData.expected, testData.input.prepareDownstreamRequests()) + assert.Equal(t, testData.expected, mustSucceed(testData.input.prepareDownstreamRequests())) // Ensure responses slices have been initialized. for _, req := range testData.input { diff --git a/pkg/frontend/querymiddleware/split_by_instant_interval.go b/pkg/frontend/querymiddleware/split_by_instant_interval.go index 475b22113c7..ea65e16e635 100644 --- a/pkg/frontend/querymiddleware/split_by_instant_interval.go +++ b/pkg/frontend/querymiddleware/split_by_instant_interval.go @@ -171,7 +171,14 @@ func (s *splitInstantQueryByIntervalMiddleware) Do(ctx context.Context, req Metr s.metrics.splitQueriesPerQuery.Observe(float64(mapperStats.GetSplitQueries())) // Send hint with number of embedded queries to the sharding middleware - req = req.WithExpr(instantSplitQuery).WithTotalQueriesHint(int32(mapperStats.GetSplitQueries())) + req, err = req.WithExpr(instantSplitQuery) + if err != nil { + return nil, err + } + req, err = req.WithTotalQueriesHint(int32(mapperStats.GetSplitQueries())) + if err != nil { + return nil, err + } shardedQueryable := newShardedQueryable(req, s.next) qry, err := newQuery(ctx, req, s.engine, lazyquery.NewLazyQueryable(shardedQueryable)) diff --git a/pkg/frontend/querymiddleware/stats.go b/pkg/frontend/querymiddleware/stats.go index 4e4c32e779f..8e2b919d16c 100644 --- a/pkg/frontend/querymiddleware/stats.go +++ b/pkg/frontend/querymiddleware/stats.go @@ -98,8 +98,15 @@ func (s queryStatsMiddleware) populateQueryDetails(ctx context.Context, req Metr if details == nil { return } - details.Start = time.UnixMilli(req.GetStart()) - details.End = time.UnixMilli(req.GetEnd()) + // This middleware may run multiple times for the same request in case of a remote read request + // (once for each query in the request). In such case, we compute the start/end time as the min/max + // timestamp we see across all queries in the request. + if details.Start.IsZero() || details.Start.After(time.UnixMilli(req.GetStart())) { + details.Start = time.UnixMilli(req.GetStart()) + } + if details.End.IsZero() || details.End.Before(time.UnixMilli(req.GetEnd())) { + details.End = time.UnixMilli(req.GetEnd()) + } details.Step = time.Duration(req.GetStep()) * time.Millisecond query, err := newQuery(ctx, req, s.engine, queryStatsErrQueryable) @@ -113,10 +120,13 @@ func (s queryStatsMiddleware) populateQueryDetails(ctx context.Context, req Metr return } minT, maxT := promql.FindMinMaxTime(evalStmt) - if minT != 0 { + // This middleware may run multiple times for the same request in case of a remote read request + // (once for each query in the request). In such case, we compute the minT/maxT time as the min/max + // timestamp we see across all queries in the request. 
+ if minT != 0 && (details.MinT.IsZero() || details.MinT.After(time.UnixMilli(minT))) { details.MinT = time.UnixMilli(minT) } - if maxT != 0 { + if maxT != 0 && (details.MaxT.IsZero() || details.MaxT.Before(time.UnixMilli(maxT))) { details.MaxT = time.UnixMilli(maxT) } } diff --git a/pkg/frontend/querymiddleware/stats_test.go b/pkg/frontend/querymiddleware/stats_test.go index 62da2f7e757..cae4d887e83 100644 --- a/pkg/frontend/querymiddleware/stats_test.go +++ b/pkg/frontend/querymiddleware/stats_test.go @@ -11,6 +11,7 @@ import ( "github.com/grafana/dskit/user" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/testutil" + "github.com/prometheus/prometheus/prompb" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -23,24 +24,22 @@ func Test_queryStatsMiddleware_Do(t *testing.T) { const tenantID = "test" type args struct { ctx context.Context - req MetricsQueryRequest + req []MetricsQueryRequest // Stats are cumulative, in particular because we don't split up remote read queries. } - tests := []struct { - name string + tests := map[string]struct { args args expectedMetrics *strings.Reader expectedQueryDetails QueryDetails }{ - { - name: "happy path", + "happy path range query": { args: args{ - req: &PrometheusRangeQueryRequest{ + req: []MetricsQueryRequest{&PrometheusRangeQueryRequest{ path: "/query_range", start: util.TimeToMillis(start), end: util.TimeToMillis(end), step: step.Milliseconds(), queryExpr: parseQuery(t, `sum(sum_over_time(metric{app="test",namespace=~"short"}[5m]))`), - }, + }}, }, expectedMetrics: strings.NewReader(` # HELP cortex_query_frontend_non_step_aligned_queries_total Total queries sent that are not step aligned. @@ -62,17 +61,16 @@ func Test_queryStatsMiddleware_Do(t *testing.T) { Step: step, }, }, - { - name: "explicit consistency", + "explicit consistency range query": { args: args{ ctx: querierapi.ContextWithReadConsistency(context.Background(), querierapi.ReadConsistencyStrong), - req: &PrometheusRangeQueryRequest{ + req: []MetricsQueryRequest{&PrometheusRangeQueryRequest{ path: "/query_range", start: util.TimeToMillis(start), end: util.TimeToMillis(end), step: step.Milliseconds(), queryExpr: parseQuery(t, `sum(sum_over_time(metric{app="test",namespace=~"short"}[5m]))`), - }, + }}, }, expectedMetrics: strings.NewReader(` # HELP cortex_query_frontend_non_step_aligned_queries_total Total queries sent that are not step aligned. @@ -97,9 +95,135 @@ func Test_queryStatsMiddleware_Do(t *testing.T) { Step: step, }, }, + "instant query": { + args: args{ + req: []MetricsQueryRequest{NewPrometheusInstantQueryRequest( + "/query", + nil, + start.Truncate(time.Millisecond).UnixMilli(), + 5*time.Minute, + parseQuery(t, `sum(metric{app="test",namespace=~"short"})`), + Options{}, + nil, + )}, + }, + expectedMetrics: strings.NewReader(` + # HELP cortex_query_frontend_non_step_aligned_queries_total Total queries sent that are not step aligned. 
+ # TYPE cortex_query_frontend_non_step_aligned_queries_total counter + cortex_query_frontend_non_step_aligned_queries_total 0 + # HELP cortex_query_frontend_regexp_matcher_count Total number of regexp matchers + # TYPE cortex_query_frontend_regexp_matcher_count counter + cortex_query_frontend_regexp_matcher_count 1 + # HELP cortex_query_frontend_regexp_matcher_optimized_count Total number of optimized regexp matchers + # TYPE cortex_query_frontend_regexp_matcher_optimized_count counter + cortex_query_frontend_regexp_matcher_optimized_count 1 + `), + expectedQueryDetails: QueryDetails{ + QuerierStats: &querier_stats.Stats{}, + Start: start.Truncate(time.Millisecond), + End: start.Truncate(time.Millisecond), + MinT: start.Truncate(time.Millisecond).Add(-5 * time.Minute), + MaxT: start.Truncate(time.Millisecond), + }, + }, + "remote read queries without hints": { + args: args{ + req: []MetricsQueryRequest{ + mustSucceed(remoteReadToMetricsQueryRequest( + "/read", + &prompb.Query{ + StartTimestampMs: start.Truncate(time.Millisecond).UnixMilli(), + EndTimestampMs: end.Truncate(time.Millisecond).Add(10 * time.Minute).UnixMilli(), + Matchers: []*prompb.LabelMatcher{ + { + Type: prompb.LabelMatcher_RE, + Name: "app", + Value: "test", + }, + }, + }, + )), + mustSucceed(remoteReadToMetricsQueryRequest( + "/read", + &prompb.Query{ + StartTimestampMs: start.Truncate(time.Millisecond).Add(-30 * time.Minute).UnixMilli(), + EndTimestampMs: end.Truncate(time.Millisecond).UnixMilli(), + Matchers: []*prompb.LabelMatcher{ + { + Type: prompb.LabelMatcher_RE, + Name: "app", + Value: "test", + }, + }, + }, + )), + }, + }, + expectedMetrics: strings.NewReader(` + # HELP cortex_query_frontend_non_step_aligned_queries_total Total queries sent that are not step aligned. + # TYPE cortex_query_frontend_non_step_aligned_queries_total counter + cortex_query_frontend_non_step_aligned_queries_total 0 + # HELP cortex_query_frontend_regexp_matcher_count Total number of regexp matchers + # TYPE cortex_query_frontend_regexp_matcher_count counter + cortex_query_frontend_regexp_matcher_count 2 + # HELP cortex_query_frontend_regexp_matcher_optimized_count Total number of optimized regexp matchers + # TYPE cortex_query_frontend_regexp_matcher_optimized_count counter + cortex_query_frontend_regexp_matcher_optimized_count 2 + `), + expectedQueryDetails: QueryDetails{ + QuerierStats: &querier_stats.Stats{}, + Start: start.Truncate(time.Millisecond).Add(-30 * time.Minute), + End: end.Truncate(time.Millisecond).Add(10 * time.Minute), + MinT: start.Truncate(time.Millisecond).Add(-30 * time.Minute), + MaxT: end.Truncate(time.Millisecond).Add(10 * time.Minute), + }, + }, + "remote read queries with hints": { + args: args{ + req: []MetricsQueryRequest{ + mustSucceed(remoteReadToMetricsQueryRequest( + "/read", + &prompb.Query{ + StartTimestampMs: start.Truncate(time.Millisecond).UnixMilli(), + EndTimestampMs: end.Truncate(time.Millisecond).Add(10 * time.Minute).UnixMilli(), + Matchers: []*prompb.LabelMatcher{ + { + Type: prompb.LabelMatcher_RE, + Name: "app", + Value: "test", + }, + }, + Hints: &prompb.ReadHints{ + // These are ignored in queries, we expect no effect on statistics. + StartMs: start.Truncate(time.Millisecond).Add(-10 * time.Minute).UnixMilli(), + EndMs: end.Truncate(time.Millisecond).Add(20 * time.Minute).UnixMilli(), + }, + }, + )), + }, + }, + expectedMetrics: strings.NewReader(` + # HELP cortex_query_frontend_non_step_aligned_queries_total Total queries sent that are not step aligned. 
+ # TYPE cortex_query_frontend_non_step_aligned_queries_total counter + cortex_query_frontend_non_step_aligned_queries_total 0 + # HELP cortex_query_frontend_regexp_matcher_count Total number of regexp matchers + # TYPE cortex_query_frontend_regexp_matcher_count counter + cortex_query_frontend_regexp_matcher_count 1 + # HELP cortex_query_frontend_regexp_matcher_optimized_count Total number of optimized regexp matchers + # TYPE cortex_query_frontend_regexp_matcher_optimized_count counter + cortex_query_frontend_regexp_matcher_optimized_count 1 + `), + expectedQueryDetails: QueryDetails{ + QuerierStats: &querier_stats.Stats{}, + Start: start.Truncate(time.Millisecond), + End: end.Truncate(time.Millisecond).Add(10 * time.Minute), + MinT: start.Truncate(time.Millisecond), + MaxT: end.Truncate(time.Millisecond).Add(10 * time.Minute), + }, + }, } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { + for name, tt := range tests { + t.Run(name, func(t *testing.T) { reg := prometheus.NewPedanticRegistry() mw := newQueryStatsMiddleware(reg, newEngine()) ctx := context.Background() @@ -109,9 +233,11 @@ func Test_queryStatsMiddleware_Do(t *testing.T) { actualDetails, ctx := ContextWithEmptyDetails(ctx) ctx = user.InjectOrgID(ctx, tenantID) - _, err := mw.Wrap(mockHandlerWith(nil, nil)).Do(ctx, tt.args.req) + for _, req := range tt.args.req { + _, err := mw.Wrap(mockHandlerWith(nil, nil)).Do(ctx, req) + require.NoError(t, err) + } - require.NoError(t, err) assert.NoError(t, testutil.GatherAndCompare(reg, tt.expectedMetrics)) assert.Equal(t, tt.expectedQueryDetails, *actualDetails) }) diff --git a/pkg/ingester/activeseries/custom_trackers_config.go b/pkg/ingester/activeseries/custom_trackers_config.go index 16156bee4a7..dfa4c097303 100644 --- a/pkg/ingester/activeseries/custom_trackers_config.go +++ b/pkg/ingester/activeseries/custom_trackers_config.go @@ -5,6 +5,7 @@ package activeseries import ( "fmt" "math" + "reflect" "strings" amlabels "github.com/prometheus/alertmanager/pkg/labels" @@ -43,6 +44,31 @@ func (c CustomTrackersConfig) String() string { return c.string } +// Equal compares two CustomTrackersConfig. This is needed to allow cmp.Equal to compare two CustomTrackersConfig. 
+func (c CustomTrackersConfig) Equal(other CustomTrackersConfig) bool { + if c.string != other.string { + return false + } + + if len(c.source) != len(other.source) { + return false + } + + if len(c.config) != len(other.config) { + return false + } + + if !reflect.DeepEqual(c.source, other.source) { + return false + } + + if !reflect.DeepEqual(c.config, other.config) { + return false + } + + return true +} + func customTrackersConfigString(cfg map[string]string) string { if len(cfg) == 0 { return "" diff --git a/pkg/ingester/activeseries/custom_trackers_config_test.go b/pkg/ingester/activeseries/custom_trackers_config_test.go index 7235510c517..47baf85d772 100644 --- a/pkg/ingester/activeseries/custom_trackers_config_test.go +++ b/pkg/ingester/activeseries/custom_trackers_config_test.go @@ -8,6 +8,7 @@ import ( "fmt" "testing" + "github.com/google/go-cmp/cmp" "github.com/pkg/errors" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -253,3 +254,90 @@ func TestTrackersConfigs_SerializeDeserialize(t *testing.T) { assert.Equal(t, obj, reSerialized) }) } + +func TestCustomTrackersConfig_Equal(t *testing.T) { + tc := map[string]struct { + cfg1 CustomTrackersConfig + cfg2 CustomTrackersConfig + expected bool + }{ + + "Equal configurations": { + cfg1: mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`), + cfg2: mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`), + expected: true, + }, + "Different descriptions": { + cfg1: func() CustomTrackersConfig { + c := mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`) + c.string = "cfg1" + return c + }(), + cfg2: func() CustomTrackersConfig { + c := mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`) + c.string = "cfg2" + return c + }(), + expected: false, + }, + "Different source maps": { + cfg1: func() CustomTrackersConfig { + c := mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`) + c.source = map[string]string{"a": "source1"} + return c + }(), + cfg2: func() CustomTrackersConfig { + c := mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`) + c.source = map[string]string{"b": "source2"} + return c + }(), + expected: false, + }, + "Different config maps": { + cfg1: func() CustomTrackersConfig { + c := mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`) + c.config = map[string]labelsMatchers{"a": nil} + return c + }(), + cfg2: func() CustomTrackersConfig { + c := mustNewCustomTrackersConfigFromString(t, `foo:{foo="bar"};baz:{baz="bar"}`) + c.config = map[string]labelsMatchers{"b": nil} + return c + }(), + expected: false, + }, + "Different keys in source maps": { + cfg1: CustomTrackersConfig{ + source: map[string]string{"a": "source1"}, + config: map[string]labelsMatchers{"b": nil}, + string: "cfg1", + }, + cfg2: CustomTrackersConfig{ + source: map[string]string{"c": "source1"}, + config: map[string]labelsMatchers{"b": nil}, + string: "cfg1", + }, + expected: false, + }, + "Different keys in config maps": { + cfg1: CustomTrackersConfig{ + source: map[string]string{"a": "source1"}, + config: map[string]labelsMatchers{"b": nil}, + string: "cfg1", + }, + cfg2: CustomTrackersConfig{ + source: map[string]string{"a": "source1"}, + config: map[string]labelsMatchers{"c": nil}, + string: "cfg1", + }, + expected: false, + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + assert.Equal(t, tt.expected, tt.cfg1.Equal(tt.cfg2)) + assert.Equal(t, 
tt.expected, cmp.Equal(tt.cfg1, tt.cfg2)) + }) + } +} diff --git a/pkg/ingester/circuitbreaker.go b/pkg/ingester/circuitbreaker.go index a5640b19dfc..f4dc7370bbc 100644 --- a/pkg/ingester/circuitbreaker.go +++ b/pkg/ingester/circuitbreaker.go @@ -32,8 +32,9 @@ const ( ) type circuitBreakerMetrics struct { - circuitBreakerTransitions *prometheus.CounterVec - circuitBreakerResults *prometheus.CounterVec + circuitBreakerTransitions *prometheus.CounterVec + circuitBreakerResults *prometheus.CounterVec + circuitBreakerRequestTimeouts prometheus.Counter } func newCircuitBreakerMetrics(r prometheus.Registerer, currentState func() circuitbreaker.State, requestType string) *circuitBreakerMetrics { @@ -48,6 +49,11 @@ func newCircuitBreakerMetrics(r prometheus.Registerer, currentState func() circu Help: "Results of executing requests via the circuit breaker.", ConstLabels: map[string]string{circuitBreakerRequestTypeLabel: requestType}, }, []string{"result"}), + circuitBreakerRequestTimeouts: promauto.With(r).NewCounter(prometheus.CounterOpts{ + Name: "cortex_ingester_circuit_breaker_request_timeouts_total", + Help: "Number of times the circuit breaker recorded a request that reached timeout.", + ConstLabels: map[string]string{circuitBreakerRequestTypeLabel: requestType}, + }), } circuitBreakerCurrentStateGauge := func(state circuitbreaker.State) prometheus.GaugeFunc { return promauto.With(r).NewGaugeFunc(prometheus.GaugeOpts{ @@ -154,7 +160,16 @@ func newCircuitBreaker(cfg CircuitBreakerConfig, registerer prometheus.Registere return &cb } -func isCircuitBreakerFailure(err error) bool { +func isDeadlineExceeded(err error) bool { + if errors.Is(err, context.DeadlineExceeded) { + return true + } + + statusCode := grpcutil.ErrorToStatusCode(err) + return statusCode == codes.DeadlineExceeded +} + +func (cb *circuitBreaker) tryRecordFailure(err error) bool { if err == nil { return false } @@ -163,20 +178,23 @@ func isCircuitBreakerFailure(err error) bool { // to be errors worthy of tripping the circuit breaker since these // are specific to a particular ingester, not a user or request. 
- if errors.Is(err, context.DeadlineExceeded) { - return true + isFailure := false + if isDeadlineExceeded(err) { + cb.metrics.circuitBreakerRequestTimeouts.Inc() + isFailure = true + } else { + var ingesterErr ingesterError + if errors.As(err, &ingesterErr) { + isFailure = ingesterErr.errorCause() == mimirpb.INSTANCE_LIMIT + } } - statusCode := grpcutil.ErrorToStatusCode(err) - if statusCode == codes.DeadlineExceeded { + if isFailure { + cb.cb.RecordFailure() + cb.metrics.circuitBreakerResults.WithLabelValues(circuitBreakerResultError).Inc() return true } - var ingesterErr ingesterError - if errors.As(err, &ingesterErr) { - return ingesterErr.errorCause() == mimirpb.INSTANCE_LIMIT - } - return false } @@ -245,9 +263,7 @@ func (cb *circuitBreaker) recordResult(errs ...error) error { } for _, err := range errs { - if err != nil && isCircuitBreakerFailure(err) { - cb.cb.RecordFailure() - cb.metrics.circuitBreakerResults.WithLabelValues(circuitBreakerResultError).Inc() + if cb.tryRecordFailure(err) { return err } } diff --git a/pkg/ingester/circuitbreaker_test.go b/pkg/ingester/circuitbreaker_test.go index de57b77f8bf..1e5f1ca698c 100644 --- a/pkg/ingester/circuitbreaker_test.go +++ b/pkg/ingester/circuitbreaker_test.go @@ -28,32 +28,34 @@ import ( "github.com/grafana/mimir/pkg/util/validation" ) -func TestIsFailure(t *testing.T) { +func TestCircuitBreaker_TryRecordFailure(t *testing.T) { + cfg := CircuitBreakerConfig{Enabled: true} + cb := newCircuitBreaker(cfg, prometheus.NewRegistry(), "test-request-type", log.NewNopLogger()) t.Run("no error", func(t *testing.T) { - require.False(t, isCircuitBreakerFailure(nil)) + require.False(t, cb.tryRecordFailure(nil)) }) t.Run("context cancelled", func(t *testing.T) { - require.False(t, isCircuitBreakerFailure(context.Canceled)) - require.False(t, isCircuitBreakerFailure(fmt.Errorf("%w", context.Canceled))) + require.False(t, cb.tryRecordFailure(context.Canceled)) + require.False(t, cb.tryRecordFailure(fmt.Errorf("%w", context.Canceled))) }) t.Run("gRPC context cancelled", func(t *testing.T) { err := status.Error(codes.Canceled, "cancelled!") - require.False(t, isCircuitBreakerFailure(err)) - require.False(t, isCircuitBreakerFailure(fmt.Errorf("%w", err))) + require.False(t, cb.tryRecordFailure(err)) + require.False(t, cb.tryRecordFailure(fmt.Errorf("%w", err))) }) t.Run("gRPC deadline exceeded", func(t *testing.T) { err := status.Error(codes.DeadlineExceeded, "broken!") - require.True(t, isCircuitBreakerFailure(err)) - require.True(t, isCircuitBreakerFailure(fmt.Errorf("%w", err))) + require.True(t, cb.tryRecordFailure(err)) + require.True(t, cb.tryRecordFailure(fmt.Errorf("%w", err))) }) t.Run("gRPC unavailable with INSTANCE_LIMIT details", func(t *testing.T) { err := newInstanceLimitReachedError("broken") - require.True(t, isCircuitBreakerFailure(err)) - require.True(t, isCircuitBreakerFailure(fmt.Errorf("%w", err))) + require.True(t, cb.tryRecordFailure(err)) + require.True(t, cb.tryRecordFailure(fmt.Errorf("%w", err))) }) t.Run("gRPC unavailable with SERVICE_UNAVAILABLE details is not a failure", func(t *testing.T) { @@ -61,14 +63,14 @@ func TestIsFailure(t *testing.T) { stat, err := stat.WithDetails(&mimirpb.ErrorDetails{Cause: mimirpb.SERVICE_UNAVAILABLE}) require.NoError(t, err) err = stat.Err() - require.False(t, isCircuitBreakerFailure(err)) - require.False(t, isCircuitBreakerFailure(fmt.Errorf("%w", err))) + require.False(t, cb.tryRecordFailure(err)) + require.False(t, cb.tryRecordFailure(fmt.Errorf("%w", err))) }) t.Run("gRPC unavailable 
without details is not a failure", func(t *testing.T) { err := status.Error(codes.Unavailable, "broken!") - require.False(t, isCircuitBreakerFailure(err)) - require.False(t, isCircuitBreakerFailure(fmt.Errorf("%w", err))) + require.False(t, cb.tryRecordFailure(err)) + require.False(t, cb.tryRecordFailure(fmt.Errorf("%w", err))) }) } @@ -175,6 +177,7 @@ func TestCircuitBreaker_TryAcquirePermit(t *testing.T) { func TestCircuitBreaker_RecordResult(t *testing.T) { metricNames := []string{ "cortex_ingester_circuit_breaker_results_total", + "cortex_ingester_circuit_breaker_request_timeouts_total", } testCases := map[string]struct { errs []error @@ -190,6 +193,9 @@ func TestCircuitBreaker_RecordResult(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "erroneous execution not passing the failure check records a success": { @@ -201,6 +207,9 @@ func TestCircuitBreaker_RecordResult(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "erroneous execution passing the failure check records an error": { @@ -212,6 +221,9 @@ func TestCircuitBreaker_RecordResult(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 1 `, }, "erroneous execution with multiple errors records the first error passing the failure check": { @@ -223,6 +235,9 @@ func TestCircuitBreaker_RecordResult(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. 
+ # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 1 `, }, } @@ -242,6 +257,7 @@ func TestCircuitBreaker_RecordResult(t *testing.T) { func TestCircuitBreaker_FinishRequest(t *testing.T) { metricNames := []string{ "cortex_ingester_circuit_breaker_results_total", + "cortex_ingester_circuit_breaker_request_timeouts_total", } instanceLimitReachedErr := newInstanceLimitReachedError("error") maxRequestDuration := 2 * time.Second @@ -262,6 +278,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "with circuit breaker not active, requestDuration lower than maxRequestDuration and no input error, finishRequest does nothing": { @@ -274,6 +293,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "with circuit breaker active, requestDuration higher than maxRequestDuration and no input error, finishRequest gives context deadline exceeded error": { @@ -287,6 +309,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. 
+ # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 1 `, }, "with circuit breaker not active, requestDuration higher than maxRequestDuration and no input error, finishRequest does nothing": { @@ -300,6 +325,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "with circuit breaker not active, requestDuration higher than maxRequestDuration and an input error relevant for circuit breakers, finishRequest does nothing": { @@ -313,6 +341,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "with circuit breaker not active, requestDuration higher than maxRequestDuration and an input error irrelevant for circuit breakers, finishRequest does nothing": { @@ -326,6 +357,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "with circuit breaker active, requestDuration higher than maxRequestDuration and an input error relevant for circuit breakers, finishRequest gives the input error": { @@ -339,6 +373,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. 
+ # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 0 `, }, "with circuit breaker active, requestDuration higher than maxRequestDuration and an input error irrelevant for circuit breakers, finishRequest gives context deadline exceeded error": { @@ -352,6 +389,9 @@ func TestCircuitBreaker_FinishRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="success"} 0 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="test-request-type",result="circuit_breaker_open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="test-request-type"} 1 `, }, } @@ -381,15 +421,56 @@ func TestIngester_PushToStorage_CircuitBreaker(t *testing.T) { expectedErrorWhenCircuitBreakerClosed error pushRequestDelay time.Duration limits InstanceLimits + expectedMetrics string }{ "deadline exceeded": { expectedErrorWhenCircuitBreakerClosed: nil, limits: InstanceLimits{MaxInMemoryTenants: 3}, pushRequestDelay: pushTimeout, + expectedMetrics: ` + # HELP cortex_ingester_circuit_breaker_results_total Results of executing requests via the circuit breaker. + # TYPE cortex_ingester_circuit_breaker_results_total counter + cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 2 + cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 2 + cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 1 + # HELP cortex_ingester_circuit_breaker_transitions_total Number of times the circuit breaker has entered a state. + # TYPE cortex_ingester_circuit_breaker_transitions_total counter + cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="closed"} 0 + cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="half-open"} 0 + cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="open"} 1 + # HELP cortex_ingester_circuit_breaker_current_state Boolean set to 1 whenever the circuit breaker is in a state corresponding to the label name. + # TYPE cortex_ingester_circuit_breaker_current_state gauge + cortex_ingester_circuit_breaker_current_state{request_type="push",state="closed"} 0 + cortex_ingester_circuit_breaker_current_state{request_type="push",state="half-open"} 0 + cortex_ingester_circuit_breaker_current_state{request_type="push",state="open"} 1 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 2 + `, }, "instance limit hit": { expectedErrorWhenCircuitBreakerClosed: instanceLimitReachedError{}, limits: InstanceLimits{MaxInMemoryTenants: 1}, + expectedMetrics: ` + # HELP cortex_ingester_circuit_breaker_results_total Results of executing requests via the circuit breaker. 
+ # TYPE cortex_ingester_circuit_breaker_results_total counter + cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 2 + cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 2 + cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 1 + # HELP cortex_ingester_circuit_breaker_transitions_total Number of times the circuit breaker has entered a state. + # TYPE cortex_ingester_circuit_breaker_transitions_total counter + cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="closed"} 0 + cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="half-open"} 0 + cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="open"} 1 + # HELP cortex_ingester_circuit_breaker_current_state Boolean set to 1 whenever the circuit breaker is in a state corresponding to the label name. + # TYPE cortex_ingester_circuit_breaker_current_state gauge + cortex_ingester_circuit_breaker_current_state{request_type="push",state="closed"} 0 + cortex_ingester_circuit_breaker_current_state{request_type="push",state="half-open"} 0 + cortex_ingester_circuit_breaker_current_state{request_type="push",state="open"} 1 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 + `, }, } @@ -401,6 +482,7 @@ func TestIngester_PushToStorage_CircuitBreaker(t *testing.T) { "cortex_ingester_circuit_breaker_results_total", "cortex_ingester_circuit_breaker_transitions_total", "cortex_ingester_circuit_breaker_current_state", + "cortex_ingester_circuit_breaker_request_timeouts_total", } registry := prometheus.NewRegistry() @@ -511,25 +593,12 @@ func TestIngester_PushToStorage_CircuitBreaker(t *testing.T) { cortex_ingester_circuit_breaker_current_state{request_type="push",state="closed"} 1 cortex_ingester_circuit_breaker_current_state{request_type="push",state="half-open"} 0 cortex_ingester_circuit_breaker_current_state{request_type="push",state="open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 ` } else { - expectedMetrics = ` - # HELP cortex_ingester_circuit_breaker_results_total Results of executing requests via the circuit breaker. - # TYPE cortex_ingester_circuit_breaker_results_total counter - cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 2 - cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 2 - cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 1 - # HELP cortex_ingester_circuit_breaker_transitions_total Number of times the circuit breaker has entered a state. 
- # TYPE cortex_ingester_circuit_breaker_transitions_total counter - cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="closed"} 0 - cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="half-open"} 0 - cortex_ingester_circuit_breaker_transitions_total{request_type="push",state="open"} 1 - # HELP cortex_ingester_circuit_breaker_current_state Boolean set to 1 whenever the circuit breaker is in a state corresponding to the label name. - # TYPE cortex_ingester_circuit_breaker_current_state gauge - cortex_ingester_circuit_breaker_current_state{request_type="push",state="closed"} 0 - cortex_ingester_circuit_breaker_current_state{request_type="push",state="half-open"} 0 - cortex_ingester_circuit_breaker_current_state{request_type="push",state="open"} 1 - ` + expectedMetrics = testCase.expectedMetrics } assert.NoError(t, testutil.GatherAndCompare(registry, strings.NewReader(expectedMetrics), metricNames...)) }) @@ -593,6 +662,7 @@ func TestIngester_StartPushRequest_CircuitBreakerOpen(t *testing.T) { func TestIngester_FinishPushRequest(t *testing.T) { metricNames := []string{ "cortex_ingester_circuit_breaker_results_total", + "cortex_ingester_circuit_breaker_request_timeouts_total", } testCases := map[string]struct { pushRequestDuration time.Duration @@ -610,6 +680,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 1 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 `, }, "when a permit not acquired, pushRequestDuration lower than RequestTimeout and no input err, FinishPushRequest does nothing": { @@ -622,6 +695,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 `, }, "with a permit acquired, pushRequestDuration higher than RequestTimeout and no input error, FinishPushRequest records a failure": { @@ -634,6 +710,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. 
+ # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 1 `, }, "with a permit not acquired, pushRequestDuration higher than RequestTimeout and no input error, FinishPushRequest does nothing": { @@ -646,6 +725,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 `, }, "with a permit acquired, pushRequestDuration higher than RequestTimeout and an input error relevant for the circuit breakers, FinishPushRequest records a failure": { @@ -658,6 +740,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 `, }, "with a permit acquired, pushRequestDuration higher than RequestTimeout and an input error irrelevant for the circuit breakers, FinishPushRequest records a failure": { @@ -670,6 +755,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 1 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 1 `, }, "with a permit not acquired, pushRequestDuration higher than RequestTimeout and an input error relevant for the circuit breakers, FinishPushRequest does nothing": { @@ -682,6 +770,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. 
+ # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 `, }, "with a permit not acquired, pushRequestDuration higher than RequestTimeout and an input error irrelevant for the circuit breakers, FinishPushRequest does nothing": { @@ -694,6 +785,9 @@ func TestIngester_FinishPushRequest(t *testing.T) { cortex_ingester_circuit_breaker_results_total{request_type="push",result="circuit_breaker_open"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="error"} 0 cortex_ingester_circuit_breaker_results_total{request_type="push",result="success"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 `, }, } @@ -745,6 +839,7 @@ func TestIngester_Push_CircuitBreaker_DeadlineExceeded(t *testing.T) { "cortex_ingester_circuit_breaker_results_total", "cortex_ingester_circuit_breaker_transitions_total", "cortex_ingester_circuit_breaker_current_state", + "cortex_ingester_circuit_breaker_request_timeouts_total", } registry := prometheus.NewRegistry() @@ -861,6 +956,9 @@ func TestIngester_Push_CircuitBreaker_DeadlineExceeded(t *testing.T) { cortex_ingester_circuit_breaker_current_state{request_type="push",state="closed"} 1 cortex_ingester_circuit_breaker_current_state{request_type="push",state="half-open"} 0 cortex_ingester_circuit_breaker_current_state{request_type="push",state="open"} 0 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0 ` } else { expectedMetrics = ` @@ -879,6 +977,9 @@ func TestIngester_Push_CircuitBreaker_DeadlineExceeded(t *testing.T) { cortex_ingester_circuit_breaker_current_state{request_type="push",state="closed"} 0 cortex_ingester_circuit_breaker_current_state{request_type="push",state="half-open"} 0 cortex_ingester_circuit_breaker_current_state{request_type="push",state="open"} 1 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. + # TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter + cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 2 ` } assert.NoError(t, testutil.GatherAndCompare(registry, strings.NewReader(expectedMetrics), metricNames...)) @@ -957,11 +1058,16 @@ func TestPRCircuitBreaker_NewPRCircuitBreaker(t *testing.T) { cortex_ingester_circuit_breaker_current_state{request_type="read",state="half-open"} 0 cortex_ingester_circuit_breaker_current_state{request_type="push",state="closed"} 1 cortex_ingester_circuit_breaker_current_state{request_type="read",state="closed"} 1 + # HELP cortex_ingester_circuit_breaker_request_timeouts_total Number of times the circuit breaker recorded a request that reached timeout. 
+		# TYPE cortex_ingester_circuit_breaker_request_timeouts_total counter
+		cortex_ingester_circuit_breaker_request_timeouts_total{request_type="push"} 0
+		cortex_ingester_circuit_breaker_request_timeouts_total{request_type="read"} 0
 	`
 	metricNames := []string{
 		"cortex_ingester_circuit_breaker_results_total",
 		"cortex_ingester_circuit_breaker_transitions_total",
 		"cortex_ingester_circuit_breaker_current_state",
+		"cortex_ingester_circuit_breaker_request_timeouts_total",
 	}
 	assert.NoError(t, testutil.GatherAndCompare(registerer, strings.NewReader(expectedMetrics), metricNames...))
 }
diff --git a/pkg/ingester/client/chunkcompat.go b/pkg/ingester/client/chunkcompat.go
index f34521a4a4a..d3eedaf43d3 100644
--- a/pkg/ingester/client/chunkcompat.go
+++ b/pkg/ingester/client/chunkcompat.go
@@ -15,14 +15,13 @@ import (
 
 	"github.com/grafana/mimir/pkg/mimirpb"
 	"github.com/grafana/mimir/pkg/storage/chunk"
-	"github.com/grafana/mimir/pkg/util"
 	"github.com/grafana/mimir/pkg/util/modelutil"
 )
 
 // StreamsToMatrix converts a slice of QueryStreamResponse to a model.Matrix.
 func StreamsToMatrix(from, through model.Time, responses []*QueryStreamResponse) (model.Matrix, error) {
 	result := model.Matrix{}
-	streamingSeries := []labels.Labels{}
+	streamingSeries := [][]mimirpb.LabelAdapter{}
 	haveReachedEndOfStreamingSeriesLabels := false
 
 	for _, response := range responses {
@@ -44,7 +43,7 @@ func StreamsToMatrix(from, through model.Time, responses []*QueryStreamResponse)
 				return nil, errors.New("received series labels after IsEndOfSeriesStream=true")
 			}
 
-			streamingSeries = append(streamingSeries, mimirpb.FromLabelAdaptersToLabels(s.Labels))
+			streamingSeries = append(streamingSeries, s.Labels)
 		}
 
 		if response.IsEndOfSeriesStream {
@@ -56,7 +55,7 @@ func StreamsToMatrix(from, through model.Time, responses []*QueryStreamResponse)
 				return nil, errors.New("received series chunks before IsEndOfSeriesStream=true")
 			}
 
-			series, err := seriesChunksToMatrix(from, through, streamingSeries[s.SeriesIndex], s.Chunks)
+			series, err := seriesChunksToMatrix(from, through, mimirpb.FromLabelAdaptersToLabels(streamingSeries[s.SeriesIndex]), mimirpb.FromLabelAdaptersToMetric(streamingSeries[s.SeriesIndex]), s.Chunks)
 			if err != nil {
 				return nil, err
 			}
@@ -83,7 +82,7 @@ func StreamingSeriesToMatrix(from, through model.Time, sSeries []StreamingSeries
 			}
 			chunks = append(chunks, sourceChunks...)
 		}
-		stream, err := seriesChunksToMatrix(from, through, series.Labels, chunks)
+		stream, err := seriesChunksToMatrix(from, through, series.Labels, fromLabelsToMetric(series.Labels), chunks)
 		if err != nil {
 			return nil, err
 		}
@@ -92,6 +91,14 @@ func StreamingSeriesToMatrix(from, through model.Time, sSeries []StreamingSeries
 	return result, nil
 }
 
+func fromLabelsToMetric(ls labels.Labels) model.Metric {
+	m := make(model.Metric, 16)
+	ls.Range(func(l labels.Label) {
+		m[model.LabelName(l.Name)] = model.LabelValue(l.Value)
+	})
+	return m
+}
+
 // TimeSeriesChunksToMatrix converts slice of []client.TimeSeriesChunk to a model.Matrix.
 func TimeSeriesChunksToMatrix(from, through model.Time, serieses []TimeSeriesChunk) (model.Matrix, error) {
 	if serieses == nil {
@@ -100,7 +107,7 @@ func TimeSeriesChunksToMatrix(from, through model.Time, serieses []TimeSeriesChu
 
 	result := model.Matrix{}
 	for _, series := range serieses {
-		stream, err := seriesChunksToMatrix(from, through, mimirpb.FromLabelAdaptersToLabels(series.Labels), series.Chunks)
+		stream, err := seriesChunksToMatrix(from, through, mimirpb.FromLabelAdaptersToLabels(series.Labels), mimirpb.FromLabelAdaptersToMetric(series.Labels), series.Chunks)
 		if err != nil {
 			return nil, err
 		}
@@ -150,9 +157,8 @@ func TimeseriesToMatrix(from, through model.Time, series []mimirpb.TimeSeries) (
 	return result, nil
 }
 
-func seriesChunksToMatrix(from, through model.Time, l labels.Labels, c []Chunk) (*model.SampleStream, error) {
-	metric := util.LabelsToMetric(l)
-	chunks, err := FromChunks(l, c)
+func seriesChunksToMatrix(from, through model.Time, lbls labels.Labels, metric model.Metric, c []Chunk) (*model.SampleStream, error) {
+	chunks, err := FromChunks(lbls, c)
 	if err != nil {
 		return nil, err
 	}
diff --git a/pkg/ingester/client/compat.go b/pkg/ingester/client/compat.go
index 57973692e05..f79b1010d89 100644
--- a/pkg/ingester/client/compat.go
+++ b/pkg/ingester/client/compat.go
@@ -223,13 +223,3 @@ func FromLabelMatchers(matchers []*LabelMatcher) ([]*labels.Matcher, error) {
 	}
 	return result, nil
 }
-
-// LabelsToKeyString is used to form a string to be used as
-// the hashKey. Don't print, use l.String() for printing.
-func LabelsToKeyString(l labels.Labels) string {
-	// We are allocating 1024, even though most series are less than 600b long.
-	// But this is not an issue as this function is being inlined when called in a loop
-	// and buffer allocated is a static buffer and not a dynamic buffer on the heap.
-	b := make([]byte, 0, 1024)
-	return string(l.Bytes(b))
-}
diff --git a/pkg/ingester/client/compat_test.go b/pkg/ingester/client/compat_test.go
index 8bfbac5a5be..4f133464195 100644
--- a/pkg/ingester/client/compat_test.go
+++ b/pkg/ingester/client/compat_test.go
@@ -7,7 +7,6 @@ package client
 
 import (
 	"reflect"
-	"strconv"
 	"testing"
 
 	"github.com/prometheus/common/model"
@@ -86,53 +85,3 @@ func TestLabelNamesRequest(t *testing.T) {
 	assert.Equal(t, int64(maxt), actualMaxT)
 	assert.Equal(t, matchers, actualMatchers)
 }
-
-// The main usecase for `LabelsToKeyString` is to generate hashKeys
-// for maps. We are benchmarking that here.
-func BenchmarkSeriesMap(b *testing.B) {
-	benchmarkSeriesMap(100000, b)
-}
-
-func benchmarkSeriesMap(numSeries int, b *testing.B) {
-	series := makeSeries(numSeries)
-	sm := make(map[string]int, numSeries)
-
-	b.ReportAllocs()
-	b.ResetTimer()
-	for n := 0; n < b.N; n++ {
-		for i, s := range series {
-			sm[LabelsToKeyString(s)] = i
-		}
-
-		for _, s := range series {
-			_, ok := sm[LabelsToKeyString(s)]
-			if !ok {
-				b.Fatal("element missing")
-			}
-		}
-
-		if len(sm) != numSeries {
-			b.Fatal("the number of series expected:", numSeries, "got:", len(sm))
-		}
-	}
-}
-
-func makeSeries(n int) []labels.Labels {
-	series := make([]labels.Labels, 0, n)
-	for i := 0; i < n; i++ {
-		series = append(series, labels.FromMap(map[string]string{
-			"label0": "value0",
-			"label1": "value1",
-			"label2": "value2",
-			"label3": "value3",
-			"label4": "value4",
-			"label5": "value5",
-			"label6": "value6",
-			"label7": "value7",
-			"label8": "value8",
-			"label9": strconv.Itoa(i),
-		}))
-	}
-
-	return series
-}
diff --git a/pkg/ingester/client/ingester.pb.go b/pkg/ingester/client/ingester.pb.go
index 5439674039c..687fa15e022 100644
--- a/pkg/ingester/client/ingester.pb.go
+++ b/pkg/ingester/client/ingester.pb.go
@@ -85,54 +85,6 @@ func (MatchType) EnumDescriptor() ([]byte, []int) {
 	return fileDescriptor_60f6df4f3586b478, []int{1}
 }
 
-type ReadRequest_ResponseType int32
-
-const (
-	SAMPLES ReadRequest_ResponseType = 0
-	STREAMED_XOR_CHUNKS ReadRequest_ResponseType = 1
-)
-
-var ReadRequest_ResponseType_name = map[int32]string{
-	0: "SAMPLES",
-	1: "STREAMED_XOR_CHUNKS",
-}
-
-var ReadRequest_ResponseType_value = map[string]int32{
-	"SAMPLES": 0,
-	"STREAMED_XOR_CHUNKS": 1,
-}
-
-func (ReadRequest_ResponseType) EnumDescriptor() ([]byte, []int) {
-	return fileDescriptor_60f6df4f3586b478, []int{6, 0}
-}
-
-type StreamChunk_Encoding int32
-
-const (
-	UNKNOWN StreamChunk_Encoding = 0
-	XOR StreamChunk_Encoding = 1
-	HISTOGRAM StreamChunk_Encoding = 2
-	FLOAT_HISTOGRAM StreamChunk_Encoding = 3
-)
-
-var StreamChunk_Encoding_name = map[int32]string{
-	0: "UNKNOWN",
-	1: "XOR",
-	2: "HISTOGRAM",
-	3: "FLOAT_HISTOGRAM",
-}
-
-var StreamChunk_Encoding_value = map[string]int32{
-	"UNKNOWN": 0,
-	"XOR": 1,
-	"HISTOGRAM": 2,
-	"FLOAT_HISTOGRAM": 3,
-}
-
-func (StreamChunk_Encoding) EnumDescriptor() ([]byte, []int) {
-	return fileDescriptor_60f6df4f3586b478, []int{10, 0}
-}
-
 type ActiveSeriesRequest_RequestType int32
 
 const (
@@ -151,7 +103,7 @@ var ActiveSeriesRequest_RequestType_value = map[string]int32{
 }
 
 func (ActiveSeriesRequest_RequestType) EnumDescriptor() ([]byte, []int) {
-	return fileDescriptor_60f6df4f3586b478, []int{13, 0}
+	return fileDescriptor_60f6df4f3586b478, []int{8, 0}
 }
 
 type LabelNamesAndValuesRequest struct {
@@ -452,255 +404,6 @@ func (m *LabelValueSeriesCount) GetLabelValueSeries() map[string]uint64 {
 	return nil
 }
 
-type ReadRequest struct {
-	Queries []*QueryRequest `protobuf:"bytes,1,rep,name=queries,proto3" json:"queries,omitempty"`
-	AcceptedResponseTypes []ReadRequest_ResponseType `protobuf:"varint,2,rep,packed,name=accepted_response_types,json=acceptedResponseTypes,proto3,enum=cortex.ReadRequest_ResponseType" json:"accepted_response_types,omitempty"`
-}
-
-func (m *ReadRequest) Reset() { *m = ReadRequest{} }
-func (*ReadRequest) ProtoMessage() {}
-func (*ReadRequest) Descriptor() ([]byte, []int) {
-	return fileDescriptor_60f6df4f3586b478, []int{6}
-}
-func (m *ReadRequest) XXX_Unmarshal(b []byte) error {
-	return m.Unmarshal(b)
-}
-func (m *ReadRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte,
error) { - if deterministic { - return xxx_messageInfo_ReadRequest.Marshal(b, m, deterministic) - } else { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil - } -} -func (m *ReadRequest) XXX_Merge(src proto.Message) { - xxx_messageInfo_ReadRequest.Merge(m, src) -} -func (m *ReadRequest) XXX_Size() int { - return m.Size() -} -func (m *ReadRequest) XXX_DiscardUnknown() { - xxx_messageInfo_ReadRequest.DiscardUnknown(m) -} - -var xxx_messageInfo_ReadRequest proto.InternalMessageInfo - -func (m *ReadRequest) GetQueries() []*QueryRequest { - if m != nil { - return m.Queries - } - return nil -} - -func (m *ReadRequest) GetAcceptedResponseTypes() []ReadRequest_ResponseType { - if m != nil { - return m.AcceptedResponseTypes - } - return nil -} - -type ReadResponse struct { - Results []*QueryResponse `protobuf:"bytes,1,rep,name=results,proto3" json:"results,omitempty"` -} - -func (m *ReadResponse) Reset() { *m = ReadResponse{} } -func (*ReadResponse) ProtoMessage() {} -func (*ReadResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{7} -} -func (m *ReadResponse) XXX_Unmarshal(b []byte) error { - return m.Unmarshal(b) -} -func (m *ReadResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - if deterministic { - return xxx_messageInfo_ReadResponse.Marshal(b, m, deterministic) - } else { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil - } -} -func (m *ReadResponse) XXX_Merge(src proto.Message) { - xxx_messageInfo_ReadResponse.Merge(m, src) -} -func (m *ReadResponse) XXX_Size() int { - return m.Size() -} -func (m *ReadResponse) XXX_DiscardUnknown() { - xxx_messageInfo_ReadResponse.DiscardUnknown(m) -} - -var xxx_messageInfo_ReadResponse proto.InternalMessageInfo - -func (m *ReadResponse) GetResults() []*QueryResponse { - if m != nil { - return m.Results - } - return nil -} - -type StreamReadResponse struct { - ChunkedSeries []*StreamChunkedSeries `protobuf:"bytes,1,rep,name=chunked_series,json=chunkedSeries,proto3" json:"chunked_series,omitempty"` - QueryIndex int64 `protobuf:"varint,2,opt,name=query_index,json=queryIndex,proto3" json:"query_index,omitempty"` -} - -func (m *StreamReadResponse) Reset() { *m = StreamReadResponse{} } -func (*StreamReadResponse) ProtoMessage() {} -func (*StreamReadResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{8} -} -func (m *StreamReadResponse) XXX_Unmarshal(b []byte) error { - return m.Unmarshal(b) -} -func (m *StreamReadResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - if deterministic { - return xxx_messageInfo_StreamReadResponse.Marshal(b, m, deterministic) - } else { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil - } -} -func (m *StreamReadResponse) XXX_Merge(src proto.Message) { - xxx_messageInfo_StreamReadResponse.Merge(m, src) -} -func (m *StreamReadResponse) XXX_Size() int { - return m.Size() -} -func (m *StreamReadResponse) XXX_DiscardUnknown() { - xxx_messageInfo_StreamReadResponse.DiscardUnknown(m) -} - -var xxx_messageInfo_StreamReadResponse proto.InternalMessageInfo - -func (m *StreamReadResponse) GetChunkedSeries() []*StreamChunkedSeries { - if m != nil { - return m.ChunkedSeries - } - return nil -} - -func (m *StreamReadResponse) GetQueryIndex() int64 { - if m != nil { - return m.QueryIndex - } - return 0 -} - -type StreamChunkedSeries struct { - 
Labels []github_com_grafana_mimir_pkg_mimirpb.LabelAdapter `protobuf:"bytes,1,rep,name=labels,proto3,customtype=github.com/grafana/mimir/pkg/mimirpb.LabelAdapter" json:"labels"` - Chunks []StreamChunk `protobuf:"bytes,2,rep,name=chunks,proto3" json:"chunks"` -} - -func (m *StreamChunkedSeries) Reset() { *m = StreamChunkedSeries{} } -func (*StreamChunkedSeries) ProtoMessage() {} -func (*StreamChunkedSeries) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{9} -} -func (m *StreamChunkedSeries) XXX_Unmarshal(b []byte) error { - return m.Unmarshal(b) -} -func (m *StreamChunkedSeries) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - if deterministic { - return xxx_messageInfo_StreamChunkedSeries.Marshal(b, m, deterministic) - } else { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil - } -} -func (m *StreamChunkedSeries) XXX_Merge(src proto.Message) { - xxx_messageInfo_StreamChunkedSeries.Merge(m, src) -} -func (m *StreamChunkedSeries) XXX_Size() int { - return m.Size() -} -func (m *StreamChunkedSeries) XXX_DiscardUnknown() { - xxx_messageInfo_StreamChunkedSeries.DiscardUnknown(m) -} - -var xxx_messageInfo_StreamChunkedSeries proto.InternalMessageInfo - -func (m *StreamChunkedSeries) GetChunks() []StreamChunk { - if m != nil { - return m.Chunks - } - return nil -} - -type StreamChunk struct { - MinTimeMs int64 `protobuf:"varint,1,opt,name=min_time_ms,json=minTimeMs,proto3" json:"min_time_ms,omitempty"` - MaxTimeMs int64 `protobuf:"varint,2,opt,name=max_time_ms,json=maxTimeMs,proto3" json:"max_time_ms,omitempty"` - Type StreamChunk_Encoding `protobuf:"varint,3,opt,name=type,proto3,enum=cortex.StreamChunk_Encoding" json:"type,omitempty"` - Data github_com_grafana_mimir_pkg_mimirpb.UnsafeByteSlice `protobuf:"bytes,4,opt,name=data,proto3,customtype=github.com/grafana/mimir/pkg/mimirpb.UnsafeByteSlice" json:"data"` -} - -func (m *StreamChunk) Reset() { *m = StreamChunk{} } -func (*StreamChunk) ProtoMessage() {} -func (*StreamChunk) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{10} -} -func (m *StreamChunk) XXX_Unmarshal(b []byte) error { - return m.Unmarshal(b) -} -func (m *StreamChunk) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - if deterministic { - return xxx_messageInfo_StreamChunk.Marshal(b, m, deterministic) - } else { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil - } -} -func (m *StreamChunk) XXX_Merge(src proto.Message) { - xxx_messageInfo_StreamChunk.Merge(m, src) -} -func (m *StreamChunk) XXX_Size() int { - return m.Size() -} -func (m *StreamChunk) XXX_DiscardUnknown() { - xxx_messageInfo_StreamChunk.DiscardUnknown(m) -} - -var xxx_messageInfo_StreamChunk proto.InternalMessageInfo - -func (m *StreamChunk) GetMinTimeMs() int64 { - if m != nil { - return m.MinTimeMs - } - return 0 -} - -func (m *StreamChunk) GetMaxTimeMs() int64 { - if m != nil { - return m.MaxTimeMs - } - return 0 -} - -func (m *StreamChunk) GetType() StreamChunk_Encoding { - if m != nil { - return m.Type - } - return UNKNOWN -} - type QueryRequest struct { StartTimestampMs int64 `protobuf:"varint,1,opt,name=start_timestamp_ms,json=startTimestampMs,proto3" json:"start_timestamp_ms,omitempty"` EndTimestampMs int64 `protobuf:"varint,2,opt,name=end_timestamp_ms,json=endTimestampMs,proto3" json:"end_timestamp_ms,omitempty"` @@ -712,7 +415,7 @@ type QueryRequest struct { func (m *QueryRequest) Reset() { *m 
= QueryRequest{} } func (*QueryRequest) ProtoMessage() {} func (*QueryRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{11} + return fileDescriptor_60f6df4f3586b478, []int{6} } func (m *QueryRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -778,7 +481,7 @@ type ExemplarQueryRequest struct { func (m *ExemplarQueryRequest) Reset() { *m = ExemplarQueryRequest{} } func (*ExemplarQueryRequest) ProtoMessage() {} func (*ExemplarQueryRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{12} + return fileDescriptor_60f6df4f3586b478, []int{7} } func (m *ExemplarQueryRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -836,7 +539,7 @@ type ActiveSeriesRequest struct { func (m *ActiveSeriesRequest) Reset() { *m = ActiveSeriesRequest{} } func (*ActiveSeriesRequest) ProtoMessage() {} func (*ActiveSeriesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{13} + return fileDescriptor_60f6df4f3586b478, []int{8} } func (m *ActiveSeriesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -886,7 +589,7 @@ type QueryResponse struct { func (m *QueryResponse) Reset() { *m = QueryResponse{} } func (*QueryResponse) ProtoMessage() {} func (*QueryResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{14} + return fileDescriptor_60f6df4f3586b478, []int{9} } func (m *QueryResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -941,7 +644,7 @@ type QueryStreamResponse struct { func (m *QueryStreamResponse) Reset() { *m = QueryStreamResponse{} } func (*QueryStreamResponse) ProtoMessage() {} func (*QueryStreamResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{15} + return fileDescriptor_60f6df4f3586b478, []int{10} } func (m *QueryStreamResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1013,7 +716,7 @@ type QueryStreamSeries struct { func (m *QueryStreamSeries) Reset() { *m = QueryStreamSeries{} } func (*QueryStreamSeries) ProtoMessage() {} func (*QueryStreamSeries) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{16} + return fileDescriptor_60f6df4f3586b478, []int{11} } func (m *QueryStreamSeries) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1057,7 +760,7 @@ type QueryStreamSeriesChunks struct { func (m *QueryStreamSeriesChunks) Reset() { *m = QueryStreamSeriesChunks{} } func (*QueryStreamSeriesChunks) ProtoMessage() {} func (*QueryStreamSeriesChunks) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{17} + return fileDescriptor_60f6df4f3586b478, []int{12} } func (m *QueryStreamSeriesChunks) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1107,7 +810,7 @@ type ExemplarQueryResponse struct { func (m *ExemplarQueryResponse) Reset() { *m = ExemplarQueryResponse{} } func (*ExemplarQueryResponse) ProtoMessage() {} func (*ExemplarQueryResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{18} + return fileDescriptor_60f6df4f3586b478, []int{13} } func (m *ExemplarQueryResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1153,7 +856,7 @@ type LabelValuesRequest struct { func (m *LabelValuesRequest) Reset() { *m = LabelValuesRequest{} } func (*LabelValuesRequest) ProtoMessage() {} func (*LabelValuesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{19} + return fileDescriptor_60f6df4f3586b478, []int{14} } func (m 
*LabelValuesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1217,7 +920,7 @@ type LabelValuesResponse struct { func (m *LabelValuesResponse) Reset() { *m = LabelValuesResponse{} } func (*LabelValuesResponse) ProtoMessage() {} func (*LabelValuesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{20} + return fileDescriptor_60f6df4f3586b478, []int{15} } func (m *LabelValuesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1262,7 +965,7 @@ type LabelNamesRequest struct { func (m *LabelNamesRequest) Reset() { *m = LabelNamesRequest{} } func (*LabelNamesRequest) ProtoMessage() {} func (*LabelNamesRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{21} + return fileDescriptor_60f6df4f3586b478, []int{16} } func (m *LabelNamesRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1319,7 +1022,7 @@ type LabelNamesResponse struct { func (m *LabelNamesResponse) Reset() { *m = LabelNamesResponse{} } func (*LabelNamesResponse) ProtoMessage() {} func (*LabelNamesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{22} + return fileDescriptor_60f6df4f3586b478, []int{17} } func (m *LabelNamesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1362,7 +1065,7 @@ type UserStatsRequest struct { func (m *UserStatsRequest) Reset() { *m = UserStatsRequest{} } func (*UserStatsRequest) ProtoMessage() {} func (*UserStatsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{23} + return fileDescriptor_60f6df4f3586b478, []int{18} } func (m *UserStatsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1408,7 +1111,7 @@ type UserStatsResponse struct { func (m *UserStatsResponse) Reset() { *m = UserStatsResponse{} } func (*UserStatsResponse) ProtoMessage() {} func (*UserStatsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{24} + return fileDescriptor_60f6df4f3586b478, []int{19} } func (m *UserStatsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1473,7 +1176,7 @@ type UserIDStatsResponse struct { func (m *UserIDStatsResponse) Reset() { *m = UserIDStatsResponse{} } func (*UserIDStatsResponse) ProtoMessage() {} func (*UserIDStatsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{25} + return fileDescriptor_60f6df4f3586b478, []int{20} } func (m *UserIDStatsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1523,7 +1226,7 @@ type UsersStatsResponse struct { func (m *UsersStatsResponse) Reset() { *m = UsersStatsResponse{} } func (*UsersStatsResponse) ProtoMessage() {} func (*UsersStatsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{26} + return fileDescriptor_60f6df4f3586b478, []int{21} } func (m *UsersStatsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1568,7 +1271,7 @@ type MetricsForLabelMatchersRequest struct { func (m *MetricsForLabelMatchersRequest) Reset() { *m = MetricsForLabelMatchersRequest{} } func (*MetricsForLabelMatchersRequest) ProtoMessage() {} func (*MetricsForLabelMatchersRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{27} + return fileDescriptor_60f6df4f3586b478, []int{22} } func (m *MetricsForLabelMatchersRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1625,7 +1328,7 @@ type MetricsForLabelMatchersResponse struct { func (m *MetricsForLabelMatchersResponse) 
Reset() { *m = MetricsForLabelMatchersResponse{} } func (*MetricsForLabelMatchersResponse) ProtoMessage() {} func (*MetricsForLabelMatchersResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{28} + return fileDescriptor_60f6df4f3586b478, []int{23} } func (m *MetricsForLabelMatchersResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1670,7 +1373,7 @@ type MetricsMetadataRequest struct { func (m *MetricsMetadataRequest) Reset() { *m = MetricsMetadataRequest{} } func (*MetricsMetadataRequest) ProtoMessage() {} func (*MetricsMetadataRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{29} + return fileDescriptor_60f6df4f3586b478, []int{24} } func (m *MetricsMetadataRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1727,7 +1430,7 @@ type MetricsMetadataResponse struct { func (m *MetricsMetadataResponse) Reset() { *m = MetricsMetadataResponse{} } func (*MetricsMetadataResponse) ProtoMessage() {} func (*MetricsMetadataResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{30} + return fileDescriptor_60f6df4f3586b478, []int{25} } func (m *MetricsMetadataResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1773,7 +1476,7 @@ type ActiveSeriesResponse struct { func (m *ActiveSeriesResponse) Reset() { *m = ActiveSeriesResponse{} } func (*ActiveSeriesResponse) ProtoMessage() {} func (*ActiveSeriesResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{31} + return fileDescriptor_60f6df4f3586b478, []int{26} } func (m *ActiveSeriesResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1826,7 +1529,7 @@ type TimeSeriesChunk struct { func (m *TimeSeriesChunk) Reset() { *m = TimeSeriesChunk{} } func (*TimeSeriesChunk) ProtoMessage() {} func (*TimeSeriesChunk) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{32} + return fileDescriptor_60f6df4f3586b478, []int{27} } func (m *TimeSeriesChunk) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1886,7 +1589,7 @@ type Chunk struct { func (m *Chunk) Reset() { *m = Chunk{} } func (*Chunk) ProtoMessage() {} func (*Chunk) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{33} + return fileDescriptor_60f6df4f3586b478, []int{28} } func (m *Chunk) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1943,7 +1646,7 @@ type LabelMatchers struct { func (m *LabelMatchers) Reset() { *m = LabelMatchers{} } func (*LabelMatchers) ProtoMessage() {} func (*LabelMatchers) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{34} + return fileDescriptor_60f6df4f3586b478, []int{29} } func (m *LabelMatchers) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1988,7 +1691,7 @@ type LabelMatcher struct { func (m *LabelMatcher) Reset() { *m = LabelMatcher{} } func (*LabelMatcher) ProtoMessage() {} func (*LabelMatcher) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{35} + return fileDescriptor_60f6df4f3586b478, []int{30} } func (m *LabelMatcher) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2048,7 +1751,7 @@ type TimeSeriesFile struct { func (m *TimeSeriesFile) Reset() { *m = TimeSeriesFile{} } func (*TimeSeriesFile) ProtoMessage() {} func (*TimeSeriesFile) Descriptor() ([]byte, []int) { - return fileDescriptor_60f6df4f3586b478, []int{36} + return fileDescriptor_60f6df4f3586b478, []int{31} } func (m *TimeSeriesFile) XXX_Unmarshal(b []byte) error { 
return m.Unmarshal(b) @@ -2108,8 +1811,6 @@ func (m *TimeSeriesFile) GetData() []byte { func init() { proto.RegisterEnum("cortex.CountMethod", CountMethod_name, CountMethod_value) proto.RegisterEnum("cortex.MatchType", MatchType_name, MatchType_value) - proto.RegisterEnum("cortex.ReadRequest_ResponseType", ReadRequest_ResponseType_name, ReadRequest_ResponseType_value) - proto.RegisterEnum("cortex.StreamChunk_Encoding", StreamChunk_Encoding_name, StreamChunk_Encoding_value) proto.RegisterEnum("cortex.ActiveSeriesRequest_RequestType", ActiveSeriesRequest_RequestType_name, ActiveSeriesRequest_RequestType_value) proto.RegisterType((*LabelNamesAndValuesRequest)(nil), "cortex.LabelNamesAndValuesRequest") proto.RegisterType((*LabelNamesAndValuesResponse)(nil), "cortex.LabelNamesAndValuesResponse") @@ -2118,11 +1819,6 @@ func init() { proto.RegisterType((*LabelValuesCardinalityResponse)(nil), "cortex.LabelValuesCardinalityResponse") proto.RegisterType((*LabelValueSeriesCount)(nil), "cortex.LabelValueSeriesCount") proto.RegisterMapType((map[string]uint64)(nil), "cortex.LabelValueSeriesCount.LabelValueSeriesEntry") - proto.RegisterType((*ReadRequest)(nil), "cortex.ReadRequest") - proto.RegisterType((*ReadResponse)(nil), "cortex.ReadResponse") - proto.RegisterType((*StreamReadResponse)(nil), "cortex.StreamReadResponse") - proto.RegisterType((*StreamChunkedSeries)(nil), "cortex.StreamChunkedSeries") - proto.RegisterType((*StreamChunk)(nil), "cortex.StreamChunk") proto.RegisterType((*QueryRequest)(nil), "cortex.QueryRequest") proto.RegisterType((*ExemplarQueryRequest)(nil), "cortex.ExemplarQueryRequest") proto.RegisterType((*ActiveSeriesRequest)(nil), "cortex.ActiveSeriesRequest") @@ -2154,136 +1850,119 @@ func init() { func init() { proto.RegisterFile("ingester.proto", fileDescriptor_60f6df4f3586b478) } var fileDescriptor_60f6df4f3586b478 = []byte{ - // 2064 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x59, 0xcd, 0x6f, 0x1b, 0xc7, - 0x15, 0xe7, 0xf0, 0x43, 0x12, 0x1f, 0x29, 0x9a, 0x1a, 0x4a, 0x26, 0xb3, 0x8a, 0x29, 0x65, 0x0b, - 0x27, 0x6a, 0x9a, 0x50, 0xfe, 0x6a, 0xe0, 0xa4, 0x29, 0x02, 0x4a, 0xa2, 0x2d, 0xda, 0xa6, 0xa8, - 0x2c, 0xa9, 0xc4, 0x2d, 0x10, 0x2c, 0x96, 0xe4, 0x48, 0x5a, 0x88, 0xbb, 0x64, 0x76, 0x97, 0x81, - 0x94, 0x53, 0x81, 0x02, 0x3d, 0xf7, 0xd6, 0x4b, 0x51, 0xa0, 0xb7, 0xa2, 0xa7, 0xa2, 0x97, 0x5e, - 0x8a, 0x9e, 0x73, 0x09, 0xe0, 0x63, 0x50, 0xa0, 0x46, 0x2d, 0xf7, 0xd0, 0xde, 0x02, 0xf4, 0x1f, - 0x08, 0xe6, 0x63, 0x3f, 0xb9, 0xfa, 0x70, 0x10, 0xfb, 0x24, 0xce, 0xfb, 0x9a, 0xdf, 0x7b, 0xf3, - 0xe6, 0xbd, 0x37, 0x2b, 0x28, 0xe8, 0xe6, 0x01, 0xb1, 0x1d, 0x62, 0xd5, 0xc6, 0xd6, 0xc8, 0x19, - 0xe1, 0x99, 0xfe, 0xc8, 0x72, 0xc8, 0xb1, 0xf4, 0xee, 0x81, 0xee, 0x1c, 0x4e, 0x7a, 0xb5, 0xfe, - 0xc8, 0x58, 0x3f, 0x18, 0x1d, 0x8c, 0xd6, 0x19, 0xbb, 0x37, 0xd9, 0x67, 0x2b, 0xb6, 0x60, 0xbf, - 0xb8, 0x9a, 0x74, 0x23, 0x28, 0x6e, 0x69, 0xfb, 0x9a, 0xa9, 0xad, 0x1b, 0xba, 0xa1, 0x5b, 0xeb, - 0xe3, 0xa3, 0x03, 0xfe, 0x6b, 0xdc, 0xe3, 0x7f, 0xb9, 0x86, 0xfc, 0x1b, 0x04, 0xd2, 0x23, 0xad, - 0x47, 0x86, 0x3b, 0x9a, 0x41, 0xec, 0xba, 0x39, 0xf8, 0x44, 0x1b, 0x4e, 0x88, 0xad, 0x90, 0xcf, - 0x27, 0xc4, 0x76, 0xf0, 0x0d, 0x98, 0x33, 0x34, 0xa7, 0x7f, 0x48, 0x2c, 0xbb, 0x82, 0x56, 0x53, - 0x6b, 0xb9, 0x5b, 0x8b, 0x35, 0x0e, 0xad, 0xc6, 0xb4, 0x5a, 0x9c, 0xa9, 0x78, 0x52, 0xf8, 0x3d, - 0xc8, 0xf7, 0x47, 0x13, 0xd3, 0x51, 0x0d, 0xe2, 0x1c, 0x8e, 0x06, 0x95, 0xe4, 0x2a, 0x5a, 0x2b, - 0xdc, 0x2a, 0xb9, 0x5a, 0x9b, 0x94, 0xd7, 0x62, 0x2c, 0x25, 0xd7, 
0xf7, 0x17, 0xf2, 0x36, 0x2c, - 0xc7, 0xe2, 0xb0, 0xc7, 0x23, 0xd3, 0x26, 0xf8, 0xc7, 0x90, 0xd1, 0x1d, 0x62, 0xb8, 0x28, 0x4a, - 0x21, 0x14, 0x42, 0x96, 0x4b, 0xc8, 0x5b, 0x90, 0x0b, 0x50, 0xf1, 0x35, 0x80, 0x21, 0x5d, 0xaa, - 0xa6, 0x66, 0x90, 0x0a, 0x5a, 0x45, 0x6b, 0x59, 0x25, 0x3b, 0x74, 0xb7, 0xc2, 0x57, 0x61, 0xe6, - 0x0b, 0x26, 0x58, 0x49, 0xae, 0xa6, 0xd6, 0xb2, 0x8a, 0x58, 0xc9, 0x7f, 0x46, 0x70, 0x2d, 0x60, - 0x66, 0x53, 0xb3, 0x06, 0xba, 0xa9, 0x0d, 0x75, 0xe7, 0xc4, 0x8d, 0xcd, 0x0a, 0xe4, 0x7c, 0xc3, - 0x1c, 0x58, 0x56, 0x01, 0xcf, 0xb2, 0x1d, 0x0a, 0x5e, 0xf2, 0x7b, 0x05, 0x2f, 0x75, 0xc9, 0xe0, - 0xed, 0x41, 0xf5, 0x2c, 0xac, 0x22, 0x7e, 0xb7, 0xc3, 0xf1, 0xbb, 0x36, 0x1d, 0xbf, 0x0e, 0xb1, - 0x74, 0x62, 0xb3, 0x2d, 0xdc, 0x48, 0x3e, 0x45, 0xb0, 0x14, 0x2b, 0x70, 0x51, 0x50, 0x35, 0xc0, - 0x9c, 0xcd, 0x82, 0xa9, 0xda, 0x4c, 0x53, 0xc4, 0xe0, 0xf6, 0xb9, 0x5b, 0x4f, 0x51, 0x1b, 0xa6, - 0x63, 0x9d, 0x28, 0xc5, 0x61, 0x84, 0x2c, 0x6d, 0x4e, 0x43, 0x63, 0xa2, 0xb8, 0x08, 0xa9, 0x23, - 0x72, 0x22, 0x30, 0xd1, 0x9f, 0x78, 0x11, 0x32, 0x0c, 0x07, 0xcb, 0xc5, 0xb4, 0xc2, 0x17, 0x1f, - 0x24, 0xef, 0x22, 0xf9, 0x6b, 0x04, 0x39, 0x85, 0x68, 0x03, 0xf7, 0x48, 0x6b, 0x30, 0xfb, 0xf9, - 0x84, 0x83, 0x8d, 0x64, 0xfb, 0xc7, 0x13, 0x62, 0xb9, 0x27, 0xaf, 0xb8, 0x42, 0xf8, 0x31, 0x94, - 0xb5, 0x7e, 0x9f, 0x8c, 0x1d, 0x32, 0x50, 0x2d, 0x11, 0x6a, 0xd5, 0x39, 0x19, 0x0b, 0x67, 0x0b, - 0xb7, 0x56, 0x5d, 0xfd, 0xc0, 0x2e, 0x35, 0xf7, 0x50, 0xba, 0x27, 0x63, 0xa2, 0x2c, 0xb9, 0x06, - 0x82, 0x54, 0x5b, 0xbe, 0x03, 0xf9, 0x20, 0x01, 0xe7, 0x60, 0xb6, 0x53, 0x6f, 0xed, 0x3e, 0x6a, - 0x74, 0x8a, 0x09, 0x5c, 0x86, 0x52, 0xa7, 0xab, 0x34, 0xea, 0xad, 0xc6, 0x96, 0xfa, 0xb8, 0xad, - 0xa8, 0x9b, 0xdb, 0x7b, 0x3b, 0x0f, 0x3b, 0x45, 0x24, 0x7f, 0x44, 0xb5, 0x34, 0xcf, 0x14, 0x5e, - 0x87, 0x59, 0x8b, 0xd8, 0x93, 0xa1, 0xe3, 0xfa, 0xb3, 0x14, 0xf1, 0x87, 0xcb, 0x29, 0xae, 0x94, - 0x7c, 0x02, 0xb8, 0xe3, 0x58, 0x44, 0x33, 0x42, 0x66, 0x36, 0xa0, 0xd0, 0x3f, 0x9c, 0x98, 0x47, - 0x64, 0xe0, 0x1e, 0x25, 0xb7, 0xb6, 0xec, 0x5a, 0xe3, 0x3a, 0x9b, 0x5c, 0x86, 0x1f, 0x86, 0x32, - 0xdf, 0x0f, 0x2e, 0xe9, 0x6d, 0xa1, 0x51, 0x3b, 0x51, 0x75, 0x73, 0x40, 0x8e, 0xd9, 0x51, 0xa4, - 0x14, 0x60, 0xa4, 0x26, 0xa5, 0xc8, 0x7f, 0x41, 0x50, 0x8a, 0xb1, 0x83, 0xf7, 0x61, 0x86, 0x1d, - 0x7e, 0xf4, 0xea, 0x8f, 0x7b, 0x3c, 0x57, 0x76, 0x35, 0xdd, 0xda, 0x78, 0xff, 0xab, 0xa7, 0x2b, - 0x89, 0x7f, 0x3e, 0x5d, 0xb9, 0x79, 0x99, 0x02, 0xc8, 0xf5, 0xea, 0x03, 0x6d, 0xec, 0x10, 0x4b, - 0x11, 0xd6, 0xf1, 0x4d, 0x98, 0x61, 0x88, 0xdd, 0x3c, 0x2d, 0xc5, 0x38, 0xb7, 0x91, 0xa6, 0xfb, - 0x28, 0x42, 0x50, 0xfe, 0x5d, 0x12, 0x72, 0x01, 0x2e, 0xae, 0x42, 0xce, 0xd0, 0x4d, 0xd5, 0xd1, - 0x0d, 0xa2, 0xb2, 0xab, 0x46, 0x7d, 0xcc, 0x1a, 0xba, 0xd9, 0xd5, 0x0d, 0xd2, 0xb2, 0x19, 0x5f, - 0x3b, 0xf6, 0xf8, 0x49, 0xc1, 0xd7, 0x8e, 0x05, 0xff, 0x06, 0xa4, 0x69, 0xf2, 0x88, 0x6b, 0xff, - 0x7a, 0x0c, 0x80, 0x5a, 0xc3, 0xec, 0x8f, 0x06, 0xba, 0x79, 0xa0, 0x30, 0x49, 0xbc, 0x0b, 0xe9, - 0x81, 0xe6, 0x68, 0x95, 0xf4, 0x2a, 0x5a, 0xcb, 0x6f, 0x7c, 0x28, 0xa2, 0x70, 0xe7, 0x52, 0x51, - 0xd8, 0x33, 0x6d, 0x6d, 0x9f, 0x6c, 0x9c, 0x38, 0xa4, 0x33, 0xd4, 0xfb, 0x44, 0x61, 0x96, 0xe4, - 0x2d, 0x98, 0x73, 0xf7, 0xa0, 0x49, 0xb7, 0xb7, 0xf3, 0x70, 0xa7, 0xfd, 0xe9, 0x4e, 0x31, 0x81, - 0x67, 0x21, 0xf5, 0xb8, 0xad, 0x14, 0x11, 0x9e, 0x87, 0xec, 0x76, 0xb3, 0xd3, 0x6d, 0xdf, 0x57, - 0xea, 0xad, 0x62, 0x12, 0x97, 0xe0, 0xca, 0xbd, 0x47, 0xed, 0x7a, 0x57, 0xf5, 0x89, 0x29, 0xf9, - 0x3f, 0x08, 0xf2, 0xc1, 0x2b, 0x83, 0xdf, 0x01, 0x6c, 0x3b, 0x9a, 0xe5, 0x30, 0xe7, 0x6d, 
0x47, - 0x33, 0xc6, 0x7e, 0x84, 0x8a, 0x8c, 0xd3, 0x75, 0x19, 0x2d, 0x1b, 0xaf, 0x41, 0x91, 0x98, 0x83, - 0xb0, 0x2c, 0x8f, 0x56, 0x81, 0x98, 0x83, 0xa0, 0x64, 0xb0, 0xc6, 0xa6, 0x2e, 0x55, 0x63, 0x7f, - 0x0e, 0xcb, 0x36, 0x0b, 0xa8, 0x6e, 0x1e, 0xa8, 0xfc, 0x20, 0xd5, 0x1e, 0x65, 0xaa, 0xb6, 0xfe, - 0x25, 0xa9, 0x0c, 0x58, 0x8d, 0xa8, 0x78, 0x22, 0x2c, 0xec, 0xf6, 0x06, 0x15, 0xe8, 0xe8, 0x5f, - 0x92, 0x07, 0xe9, 0xb9, 0x74, 0x31, 0xa3, 0x64, 0x0e, 0x75, 0xd3, 0xb1, 0xe5, 0x3f, 0x22, 0x58, - 0x6c, 0x1c, 0x13, 0x63, 0x3c, 0xd4, 0xac, 0x57, 0xe2, 0xee, 0xcd, 0x29, 0x77, 0x97, 0xe2, 0xdc, - 0xb5, 0x7d, 0x7f, 0xe5, 0xbf, 0x23, 0x28, 0xd5, 0xfb, 0x8e, 0xfe, 0x85, 0xa8, 0x92, 0xdf, 0xbf, - 0xb5, 0xff, 0x4c, 0xa4, 0x27, 0x6f, 0xe9, 0x6f, 0xb9, 0xd2, 0x31, 0xc6, 0x6b, 0xe2, 0x2f, 0xab, - 0x70, 0x4c, 0x49, 0x7e, 0x8f, 0x56, 0x5a, 0x8f, 0x88, 0x01, 0x66, 0x3a, 0x0d, 0xa5, 0xc9, 0xca, - 0xd9, 0x32, 0x94, 0x77, 0xea, 0xdd, 0xe6, 0x27, 0x0d, 0x3f, 0x85, 0x54, 0xc1, 0x44, 0xf2, 0x43, - 0x98, 0x0f, 0xd5, 0x2a, 0xfc, 0x01, 0x00, 0x0b, 0x54, 0x5c, 0x99, 0x1e, 0xf7, 0x6a, 0x34, 0x5a, - 0x1c, 0x8b, 0xb8, 0xac, 0x01, 0x69, 0xf9, 0xff, 0x49, 0x28, 0x31, 0x6b, 0x6e, 0x91, 0x13, 0x36, - 0x3f, 0x82, 0x1c, 0xcf, 0x84, 0xa0, 0xd1, 0xb2, 0xeb, 0xa0, 0x6f, 0x32, 0x58, 0x04, 0x82, 0x1a, - 0x11, 0x50, 0xc9, 0x17, 0x01, 0x85, 0x1f, 0x40, 0xd1, 0x4f, 0x48, 0x61, 0x81, 0x9f, 0xed, 0x6b, - 0xa1, 0x6a, 0xcd, 0x31, 0x87, 0xcc, 0x5c, 0xf1, 0x14, 0x45, 0xb1, 0xbc, 0x03, 0x65, 0xdd, 0x56, - 0x69, 0x32, 0x8d, 0xf6, 0x85, 0x2d, 0x95, 0xcb, 0xb0, 0x12, 0x31, 0xa7, 0x94, 0x74, 0xbb, 0x61, - 0x0e, 0xda, 0xfb, 0x5c, 0x9e, 0x9b, 0xc4, 0x9f, 0x41, 0x39, 0x8a, 0x40, 0xdc, 0x8c, 0x4a, 0x86, - 0x01, 0x59, 0x39, 0x13, 0x88, 0xb8, 0x1e, 0x1c, 0xce, 0x52, 0x04, 0x0e, 0x67, 0xca, 0xbf, 0x47, - 0xb0, 0x30, 0xa5, 0xf8, 0xca, 0xea, 0xfa, 0x8a, 0x38, 0x5b, 0x95, 0x0d, 0x4c, 0x6e, 0xe3, 0x61, - 0x24, 0x36, 0x71, 0xc8, 0x3a, 0x94, 0xcf, 0x70, 0x0b, 0xbf, 0x01, 0x79, 0x11, 0x0e, 0xde, 0xb5, - 0x10, 0x2b, 0x0e, 0x39, 0x4e, 0x63, 0x6d, 0x0b, 0xff, 0x24, 0xd2, 0x36, 0xe6, 0xbd, 0x61, 0x2d, - 0xa6, 0x61, 0x74, 0x60, 0x29, 0x52, 0x2e, 0x7e, 0x80, 0xa4, 0xfe, 0x07, 0x02, 0x1c, 0x1c, 0x83, - 0xc5, 0xfd, 0xbe, 0x60, 0x44, 0x8b, 0xaf, 0x50, 0xc9, 0x17, 0xa8, 0x50, 0xa9, 0x0b, 0x2b, 0x14, - 0x4d, 0xb9, 0x4b, 0x54, 0xa8, 0xbb, 0x50, 0x0a, 0xe1, 0x17, 0x31, 0x79, 0x03, 0xf2, 0x81, 0x21, - 0xd2, 0x1d, 0xb0, 0x73, 0xfe, 0x24, 0x68, 0xcb, 0x7f, 0x40, 0xb0, 0xe0, 0xbf, 0x1a, 0x5e, 0x6d, - 0xf1, 0xbd, 0x94, 0x6b, 0x3f, 0x15, 0x47, 0x23, 0xf0, 0x09, 0xcf, 0x2e, 0x7a, 0x39, 0xc8, 0x0f, - 0xa0, 0xb8, 0x67, 0x13, 0xab, 0xe3, 0x68, 0x8e, 0xe7, 0x55, 0xf4, 0x6d, 0x80, 0x2e, 0xf9, 0x36, - 0xf8, 0x1b, 0x82, 0x85, 0x80, 0x31, 0x01, 0xe1, 0xba, 0xfb, 0xe4, 0xd4, 0x47, 0xa6, 0x6a, 0x69, - 0x0e, 0xcf, 0x10, 0xa4, 0xcc, 0x7b, 0x54, 0x45, 0x73, 0x08, 0x4d, 0x22, 0x73, 0x62, 0xf8, 0x03, - 0x3c, 0x4d, 0xff, 0xac, 0x39, 0x71, 0xef, 0xf0, 0x3b, 0x80, 0xb5, 0xb1, 0xae, 0x46, 0x2c, 0xa5, - 0x98, 0xa5, 0xa2, 0x36, 0xd6, 0x9b, 0x21, 0x63, 0x35, 0x28, 0x59, 0x93, 0x21, 0x89, 0x8a, 0xa7, - 0x99, 0xf8, 0x02, 0x65, 0x85, 0xe4, 0xe5, 0xcf, 0xa0, 0x44, 0x81, 0x37, 0xb7, 0xc2, 0xd0, 0xcb, - 0x30, 0x3b, 0xb1, 0x89, 0xa5, 0xea, 0x03, 0x91, 0xd5, 0x33, 0x74, 0xd9, 0x1c, 0xe0, 0x77, 0xc5, - 0x30, 0x94, 0x64, 0x67, 0xe3, 0x15, 0xcf, 0x29, 0xe7, 0xc5, 0xa4, 0x73, 0x1f, 0x30, 0x65, 0xd9, - 0x61, 0xeb, 0x37, 0x21, 0x63, 0x53, 0x42, 0x74, 0xc4, 0x8d, 0x41, 0xa2, 0x70, 0x49, 0xf9, 0xaf, - 0x08, 0xaa, 0x2d, 0xe2, 0x58, 0x7a, 0xdf, 0xbe, 0x37, 0xb2, 0xc2, 0xa9, 0xf0, 0x92, 0x53, 0xf2, - 0x2e, 0xe4, 0xdd, 
0x5c, 0x53, 0x6d, 0xe2, 0x9c, 0x3f, 0x13, 0xe4, 0x5c, 0xd1, 0x0e, 0x71, 0xe4, - 0x87, 0xb0, 0x72, 0x26, 0x66, 0x11, 0x8a, 0x35, 0x98, 0x31, 0x98, 0x88, 0x88, 0x45, 0xd1, 0x2f, - 0x48, 0x5c, 0x55, 0x11, 0x7c, 0x79, 0x0c, 0x57, 0x85, 0xb1, 0x16, 0x71, 0x34, 0x1a, 0x5d, 0xd7, - 0xf1, 0x45, 0xc8, 0x0c, 0x75, 0x43, 0x77, 0x98, 0xaf, 0x0b, 0x0a, 0x5f, 0x50, 0x07, 0xd9, 0x0f, - 0x75, 0x4c, 0x2c, 0x55, 0xec, 0x91, 0x64, 0x02, 0x05, 0x46, 0xdf, 0x25, 0x16, 0xb7, 0x47, 0x9f, - 0xe7, 0x82, 0x9f, 0xe2, 0x67, 0x2d, 0x76, 0x6c, 0x43, 0x79, 0x6a, 0x47, 0x01, 0xfb, 0x0e, 0xcc, - 0x19, 0x82, 0x26, 0x80, 0x57, 0xa2, 0xc0, 0x3d, 0x1d, 0x4f, 0x52, 0xee, 0xc3, 0x62, 0x78, 0x90, - 0x79, 0xd1, 0x20, 0xd0, 0x7a, 0xd5, 0x9b, 0xf4, 0x8f, 0x88, 0xe3, 0x75, 0x9a, 0x14, 0x6d, 0x16, - 0x9c, 0xc6, 0x5b, 0xcd, 0xff, 0x10, 0x5c, 0x89, 0x4c, 0x13, 0x34, 0x16, 0xfb, 0xd6, 0xc8, 0x50, - 0xdd, 0x2f, 0x40, 0x7e, 0x5e, 0x17, 0x28, 0xbd, 0x29, 0xc8, 0xcd, 0x41, 0x30, 0xf1, 0x93, 0xa1, - 0xc4, 0xf7, 0x5b, 0x69, 0xea, 0xa5, 0xb6, 0x52, 0xbf, 0xd7, 0xa5, 0x2f, 0xee, 0x75, 0x5f, 0x23, - 0xc8, 0x70, 0x0f, 0x5f, 0x56, 0xf2, 0x4b, 0x30, 0x47, 0xc4, 0x53, 0x85, 0x65, 0x47, 0x46, 0xf1, - 0xd6, 0x2f, 0xe1, 0x61, 0x54, 0x87, 0xf9, 0xd0, 0x35, 0x79, 0xf1, 0x01, 0x5a, 0x56, 0x21, 0x1f, - 0xe4, 0xe0, 0xeb, 0x62, 0xa0, 0xe6, 0xa5, 0x7c, 0xc1, 0xd5, 0x66, 0x6c, 0x7f, 0x74, 0xc6, 0x18, - 0xd2, 0xac, 0x87, 0xf3, 0x43, 0x67, 0xbf, 0xfd, 0x6f, 0x1a, 0xfc, 0x5a, 0xf0, 0x85, 0xfc, 0x6b, - 0x04, 0x05, 0x3f, 0xbf, 0xee, 0xe9, 0x43, 0xf2, 0x43, 0xa4, 0x97, 0x04, 0x73, 0xfb, 0xfa, 0x90, - 0x30, 0x0c, 0x7c, 0x3b, 0x6f, 0x4d, 0xb1, 0xf9, 0x71, 0xe6, 0x91, 0x7a, 0x7b, 0x0d, 0x72, 0x81, - 0x6e, 0x44, 0xdf, 0x8b, 0xcd, 0x1d, 0xb5, 0xd5, 0x68, 0xb5, 0x95, 0x5f, 0x14, 0x13, 0x74, 0xf2, - 0xaf, 0x6f, 0xd2, 0x69, 0xbf, 0x88, 0xde, 0x7e, 0x00, 0x59, 0xcf, 0x59, 0x9c, 0x85, 0x4c, 0xe3, - 0xe3, 0xbd, 0xfa, 0xa3, 0x62, 0x82, 0xaa, 0xec, 0xb4, 0xbb, 0x2a, 0x5f, 0x22, 0x7c, 0x05, 0x72, - 0x4a, 0xe3, 0x7e, 0xe3, 0xb1, 0xda, 0xaa, 0x77, 0x37, 0xb7, 0x8b, 0x49, 0x8c, 0xa1, 0xc0, 0x09, - 0x3b, 0x6d, 0x41, 0x4b, 0xdd, 0xfa, 0xd7, 0x2c, 0xcc, 0xb9, 0xde, 0xe0, 0xf7, 0x21, 0xbd, 0x3b, - 0xb1, 0x0f, 0xf1, 0x55, 0xff, 0x26, 0x7c, 0x6a, 0xe9, 0x0e, 0x11, 0x65, 0x49, 0x2a, 0x4f, 0xd1, - 0xf9, 0x75, 0x97, 0x13, 0x78, 0x0b, 0x72, 0x81, 0x71, 0x10, 0xc7, 0x7e, 0x01, 0x92, 0x96, 0x63, - 0x06, 0x62, 0xdf, 0xc6, 0x0d, 0x84, 0xdb, 0x50, 0x60, 0x2c, 0x77, 0xdc, 0xb3, 0xb1, 0xf7, 0x9c, - 0x8f, 0x7b, 0x30, 0x4a, 0xd7, 0xce, 0xe0, 0x7a, 0xb0, 0xb6, 0xc3, 0x5f, 0x35, 0xa5, 0xb8, 0x0f, - 0xa0, 0x51, 0x70, 0x31, 0x53, 0x95, 0x9c, 0xc0, 0x0d, 0x00, 0x7f, 0x26, 0xc1, 0xaf, 0x85, 0x84, - 0x83, 0x73, 0x94, 0x24, 0xc5, 0xb1, 0x3c, 0x33, 0x1b, 0x90, 0xf5, 0x3a, 0x2b, 0xae, 0xc4, 0x34, - 0x5b, 0x6e, 0xe4, 0xec, 0x36, 0x2c, 0x27, 0xf0, 0x3d, 0xc8, 0xd7, 0x87, 0xc3, 0xcb, 0x98, 0x91, - 0x82, 0x1c, 0x3b, 0x6a, 0x67, 0xe8, 0x75, 0x83, 0x68, 0x33, 0xc3, 0x6f, 0x7a, 0xb7, 0xea, 0xdc, - 0x0e, 0x2d, 0xbd, 0x75, 0xa1, 0x9c, 0xb7, 0x5b, 0x17, 0xae, 0x44, 0x7a, 0x0f, 0xae, 0x46, 0xb4, - 0x23, 0x6d, 0x50, 0x5a, 0x39, 0x93, 0xef, 0x59, 0xed, 0x89, 0x29, 0x38, 0xfc, 0x01, 0x1c, 0xcb, - 0xd3, 0x87, 0x10, 0xfd, 0x4a, 0x2f, 0xfd, 0xe8, 0x5c, 0x99, 0x40, 0x56, 0x1e, 0xc1, 0xd5, 0xf8, - 0xef, 0xc4, 0xf8, 0x7a, 0x4c, 0xce, 0x4c, 0x7f, 0xf3, 0x96, 0xde, 0xbc, 0x48, 0x2c, 0xb0, 0x59, - 0x0b, 0xf2, 0xc1, 0x8e, 0x8a, 0x97, 0xcf, 0xf9, 0x60, 0x20, 0xbd, 0x1e, 0xcf, 0xf4, 0xcd, 0x6d, - 0x7c, 0xf8, 0xe4, 0x59, 0x35, 0xf1, 0xcd, 0xb3, 0x6a, 0xe2, 0xdb, 0x67, 0x55, 0xf4, 0xab, 0xd3, - 0x2a, 0xfa, 0xd3, 0x69, 0x15, 0x7d, 0x75, 
0x5a, 0x45, 0x4f, 0x4e, 0xab, 0xe8, 0xdf, 0xa7, 0x55, - 0xf4, 0xdf, 0xd3, 0x6a, 0xe2, 0xdb, 0xd3, 0x2a, 0xfa, 0xed, 0xf3, 0x6a, 0xe2, 0xc9, 0xf3, 0x6a, - 0xe2, 0x9b, 0xe7, 0xd5, 0xc4, 0x2f, 0x67, 0xfa, 0x43, 0x9d, 0x98, 0x4e, 0x6f, 0x86, 0xfd, 0xbb, - 0xe3, 0xf6, 0x77, 0x01, 0x00, 0x00, 0xff, 0xff, 0x17, 0xc5, 0xfb, 0x24, 0x69, 0x19, 0x00, 0x00, + // 1780 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x58, 0xcd, 0x6f, 0x1b, 0xc7, + 0x15, 0xe7, 0xf0, 0x2b, 0xe2, 0x23, 0x45, 0x53, 0x43, 0xd9, 0x64, 0x56, 0xd1, 0x4a, 0xd9, 0xc2, + 0x09, 0x9b, 0x26, 0x92, 0xbf, 0x1a, 0x38, 0x69, 0x8a, 0x82, 0x92, 0x69, 0x8b, 0x4e, 0x28, 0x29, + 0x4b, 0x29, 0xfd, 0x00, 0x82, 0xc5, 0x92, 0x1c, 0x4a, 0x0b, 0xed, 0x2e, 0xd9, 0xdd, 0x61, 0x60, + 0xe5, 0x54, 0xa0, 0x40, 0xcf, 0xfd, 0x03, 0x8a, 0x02, 0xbd, 0x15, 0x3d, 0xf6, 0xd2, 0x4b, 0xd1, + 0x73, 0x2e, 0x05, 0x7c, 0x6b, 0x50, 0xa0, 0x46, 0x2d, 0x5f, 0xda, 0x5b, 0x80, 0xfe, 0x03, 0xc5, + 0xce, 0xcc, 0x7e, 0x92, 0x92, 0xa8, 0x20, 0xf6, 0x89, 0x3b, 0xef, 0x6b, 0x7e, 0xef, 0xcd, 0x9b, + 0xf7, 0x1e, 0x07, 0xca, 0x86, 0x7d, 0x44, 0x5c, 0x4a, 0x9c, 0x8d, 0xb1, 0x33, 0xa2, 0x23, 0x9c, + 0xef, 0x8f, 0x1c, 0x4a, 0x9e, 0x48, 0xef, 0x1d, 0x19, 0xf4, 0x78, 0xd2, 0xdb, 0xe8, 0x8f, 0xac, + 0xcd, 0xa3, 0xd1, 0xd1, 0x68, 0x93, 0xb1, 0x7b, 0x93, 0x21, 0x5b, 0xb1, 0x05, 0xfb, 0xe2, 0x6a, + 0xd2, 0xad, 0xa8, 0xb8, 0xa3, 0x0f, 0x75, 0x5b, 0xdf, 0xb4, 0x0c, 0xcb, 0x70, 0x36, 0xc7, 0x27, + 0x47, 0xfc, 0x6b, 0xdc, 0xe3, 0xbf, 0x5c, 0x43, 0xf9, 0x0d, 0x02, 0xe9, 0x13, 0xbd, 0x47, 0xcc, + 0x5d, 0xdd, 0x22, 0x6e, 0xd3, 0x1e, 0x7c, 0xa6, 0x9b, 0x13, 0xe2, 0xaa, 0xe4, 0x97, 0x13, 0xe2, + 0x52, 0x7c, 0x0b, 0x16, 0x2c, 0x9d, 0xf6, 0x8f, 0x89, 0xe3, 0xd6, 0xd1, 0x7a, 0xa6, 0x51, 0xbc, + 0xb3, 0xbc, 0xc1, 0xa1, 0x6d, 0x30, 0xad, 0x0e, 0x67, 0xaa, 0x81, 0x14, 0x7e, 0x1f, 0x4a, 0xfd, + 0xd1, 0xc4, 0xa6, 0x9a, 0x45, 0xe8, 0xf1, 0x68, 0x50, 0x4f, 0xaf, 0xa3, 0x46, 0xf9, 0x4e, 0xd5, + 0xd7, 0xda, 0xf6, 0x78, 0x1d, 0xc6, 0x52, 0x8b, 0xfd, 0x70, 0xa1, 0xec, 0xc0, 0xca, 0x4c, 0x1c, + 0xee, 0x78, 0x64, 0xbb, 0x04, 0x7f, 0x1f, 0x72, 0x06, 0x25, 0x96, 0x8f, 0xa2, 0x1a, 0x43, 0x21, + 0x64, 0xb9, 0x84, 0xf2, 0x00, 0x8a, 0x11, 0x2a, 0x5e, 0x05, 0x30, 0xbd, 0xa5, 0x66, 0xeb, 0x16, + 0xa9, 0xa3, 0x75, 0xd4, 0x28, 0xa8, 0x05, 0xd3, 0xdf, 0x0a, 0xdf, 0x80, 0xfc, 0x17, 0x4c, 0xb0, + 0x9e, 0x5e, 0xcf, 0x34, 0x0a, 0xaa, 0x58, 0x29, 0x7f, 0x42, 0xb0, 0x1a, 0x31, 0xb3, 0xad, 0x3b, + 0x03, 0xc3, 0xd6, 0x4d, 0x83, 0x9e, 0xfa, 0xb1, 0x59, 0x83, 0x62, 0x68, 0x98, 0x03, 0x2b, 0xa8, + 0x10, 0x58, 0x76, 0x63, 0xc1, 0x4b, 0x7f, 0xab, 0xe0, 0x65, 0xe6, 0x0c, 0xde, 0x21, 0xc8, 0xe7, + 0x61, 0x15, 0xf1, 0xbb, 0x1b, 0x8f, 0xdf, 0xea, 0x74, 0xfc, 0xba, 0xc4, 0x31, 0x88, 0xcb, 0xb6, + 0xf0, 0x23, 0xf9, 0x0c, 0xc1, 0xf5, 0x99, 0x02, 0x97, 0x05, 0x55, 0x07, 0xcc, 0xd9, 0x2c, 0x98, + 0x9a, 0xcb, 0x34, 0x45, 0x0c, 0xee, 0x5e, 0xb8, 0xf5, 0x14, 0xb5, 0x65, 0x53, 0xe7, 0x54, 0xad, + 0x98, 0x09, 0xb2, 0xb4, 0x3d, 0x0d, 0x8d, 0x89, 0xe2, 0x0a, 0x64, 0x4e, 0xc8, 0xa9, 0xc0, 0xe4, + 0x7d, 0xe2, 0x65, 0xc8, 0x31, 0x1c, 0x2c, 0x17, 0xb3, 0x2a, 0x5f, 0x7c, 0x98, 0xbe, 0x8f, 0x94, + 0x7f, 0x20, 0x28, 0x7d, 0x3a, 0x21, 0x4e, 0x70, 0xa6, 0xef, 0x02, 0x76, 0xa9, 0xee, 0x50, 0x8d, + 0x1a, 0x16, 0x71, 0xa9, 0x6e, 0x8d, 0x35, 0x16, 0x33, 0xd4, 0xc8, 0xa8, 0x15, 0xc6, 0x39, 0xf0, + 0x19, 0x1d, 0x17, 0x37, 0xa0, 0x42, 0xec, 0x41, 0x5c, 0x36, 0xcd, 0x64, 0xcb, 0xc4, 0x1e, 0x44, + 0x25, 0xa3, 0xa9, 0x90, 0x99, 0x2b, 0x15, 0x7e, 0x0c, 0x2b, 0x2e, 0x75, 0x88, 0x6e, 0x19, 0xf6, + 0x91, 0xd6, 0x3f, 
0x9e, 0xd8, 0x27, 0xae, 0xd6, 0xf3, 0x98, 0x9a, 0x6b, 0x7c, 0x49, 0xea, 0x03, + 0xe6, 0x4a, 0x3d, 0x10, 0xd9, 0x66, 0x12, 0x5b, 0x9e, 0x40, 0xd7, 0xf8, 0x92, 0x28, 0x7f, 0x40, + 0xb0, 0xdc, 0x7a, 0x42, 0xac, 0xb1, 0xa9, 0x3b, 0xaf, 0xc4, 0xc3, 0xdb, 0x53, 0x1e, 0x5e, 0x9f, + 0xe5, 0xa1, 0x1b, 0xba, 0xa8, 0xfc, 0x15, 0x41, 0xb5, 0xd9, 0xa7, 0xc6, 0x17, 0xe2, 0xfc, 0xbe, + 0x7d, 0xd1, 0xf9, 0x11, 0x64, 0xe9, 0xe9, 0x98, 0x88, 0x62, 0xf3, 0xb6, 0x2f, 0x3d, 0xc3, 0xf8, + 0x86, 0xf8, 0x3d, 0x38, 0x1d, 0x13, 0x95, 0x29, 0x29, 0xef, 0x43, 0x31, 0x42, 0xc4, 0x00, 0xf9, + 0x6e, 0x4b, 0x6d, 0xb7, 0xba, 0x95, 0x14, 0x5e, 0x81, 0xda, 0x6e, 0xf3, 0xa0, 0xfd, 0x59, 0x4b, + 0xdb, 0x69, 0x77, 0x0f, 0xf6, 0x1e, 0xa9, 0xcd, 0x8e, 0x26, 0x98, 0x48, 0xf9, 0x18, 0x16, 0x45, + 0x64, 0xc5, 0x1d, 0xfb, 0x10, 0x80, 0x05, 0x8a, 0x67, 0x7b, 0x1c, 0xf9, 0xb8, 0xb7, 0xe1, 0x45, + 0x8b, 0x63, 0xd9, 0xca, 0x7e, 0xf5, 0x6c, 0x2d, 0xa5, 0x46, 0xa4, 0x95, 0xff, 0xa5, 0xa1, 0xca, + 0xac, 0x75, 0xd9, 0x89, 0x06, 0x36, 0x7f, 0x02, 0x45, 0x7e, 0xf8, 0x51, 0xa3, 0x35, 0xdf, 0xc1, + 0xd0, 0x24, 0x3b, 0x7f, 0x61, 0x37, 0xaa, 0x91, 0x00, 0x95, 0xbe, 0x0a, 0x28, 0xfc, 0x18, 0x2a, + 0x61, 0x0e, 0x0a, 0x0b, 0xfc, 0x6c, 0x5f, 0xf7, 0x11, 0x44, 0x30, 0xc7, 0xcc, 0x5c, 0x0b, 0x14, + 0x39, 0x19, 0xdf, 0x83, 0x9a, 0xe1, 0x6a, 0x5e, 0x32, 0x8d, 0x86, 0xc2, 0x96, 0xc6, 0x65, 0xea, + 0xd9, 0x75, 0xd4, 0x58, 0x50, 0xab, 0x86, 0xdb, 0xb2, 0x07, 0x7b, 0x43, 0x2e, 0xcf, 0x4d, 0xe2, + 0xcf, 0xa1, 0x96, 0x44, 0x20, 0x2e, 0x43, 0x3d, 0xc7, 0x80, 0xac, 0x9d, 0x0b, 0x44, 0xdc, 0x08, + 0x0e, 0xe7, 0x7a, 0x02, 0x0e, 0x67, 0x2a, 0xbf, 0x43, 0xb0, 0x34, 0xa5, 0x88, 0x87, 0x90, 0x67, + 0xe5, 0x26, 0xd9, 0x6c, 0xc6, 0x3d, 0x9e, 0x7f, 0xfb, 0xba, 0xe1, 0x6c, 0x7d, 0xe0, 0xd9, 0xfd, + 0xe7, 0xb3, 0xb5, 0xdb, 0xf3, 0xb4, 0x5c, 0xae, 0xd7, 0x1c, 0xe8, 0x63, 0x4a, 0x1c, 0x55, 0x58, + 0xf7, 0x1a, 0x08, 0xf3, 0x45, 0x63, 0xa5, 0x5c, 0xdc, 0x2b, 0x60, 0x24, 0x56, 0x0b, 0x15, 0x03, + 0x6a, 0xe7, 0xb8, 0x85, 0xdf, 0x84, 0x92, 0x08, 0x87, 0x61, 0x0f, 0xc8, 0x13, 0x76, 0x81, 0xb3, + 0x6a, 0x91, 0xd3, 0xda, 0x1e, 0x09, 0xff, 0x00, 0xf2, 0x22, 0x54, 0xfc, 0xd4, 0x17, 0x83, 0x36, + 0x12, 0xc9, 0x15, 0x21, 0xa2, 0x74, 0xe1, 0x7a, 0xa2, 0x5c, 0x7c, 0x07, 0x49, 0xfd, 0x37, 0x04, + 0x38, 0xda, 0xa0, 0xc5, 0xfd, 0xbe, 0xa4, 0x79, 0xcc, 0xae, 0x50, 0xe9, 0x2b, 0x54, 0xa8, 0xcc, + 0xa5, 0x15, 0xca, 0x4b, 0xb9, 0x39, 0x2a, 0xd4, 0x7d, 0xa8, 0xc6, 0xf0, 0x8b, 0x98, 0xbc, 0x09, + 0xa5, 0x48, 0x7b, 0xf3, 0x5b, 0x7f, 0x31, 0xec, 0x51, 0xae, 0xf2, 0x7b, 0x04, 0x4b, 0xe1, 0x3c, + 0xf3, 0x6a, 0x8b, 0xef, 0x5c, 0xae, 0xfd, 0x50, 0x1c, 0x8d, 0xc0, 0x27, 0x3c, 0xbb, 0x6c, 0xa6, + 0x51, 0x1e, 0x43, 0xe5, 0xd0, 0x25, 0x4e, 0x97, 0xea, 0x34, 0xf0, 0x2a, 0x39, 0xb5, 0xa0, 0x39, + 0xa7, 0x96, 0xbf, 0x20, 0x58, 0x8a, 0x18, 0x13, 0x10, 0x6e, 0xfa, 0xc3, 0xb0, 0x31, 0xb2, 0x35, + 0x47, 0xa7, 0x3c, 0x43, 0x90, 0xba, 0x18, 0x50, 0x55, 0x9d, 0x12, 0x2f, 0x89, 0xec, 0x89, 0x15, + 0x8e, 0x16, 0x5e, 0xfa, 0x17, 0xec, 0x89, 0x7f, 0x87, 0xdf, 0x05, 0xac, 0x8f, 0x0d, 0x2d, 0x61, + 0x29, 0xc3, 0x2c, 0x55, 0xf4, 0xb1, 0xd1, 0x8e, 0x19, 0xdb, 0x80, 0xaa, 0x33, 0x31, 0x49, 0x52, + 0x3c, 0xcb, 0xc4, 0x97, 0x3c, 0x56, 0x4c, 0x5e, 0xf9, 0x1c, 0xaa, 0x1e, 0xf0, 0xf6, 0x83, 0x38, + 0xf4, 0x1a, 0xbc, 0x36, 0x71, 0x89, 0xa3, 0x19, 0x03, 0x91, 0xd5, 0x79, 0x6f, 0xd9, 0x1e, 0xe0, + 0xf7, 0x20, 0x3b, 0xd0, 0xa9, 0xce, 0x60, 0x46, 0x8a, 0xe7, 0x94, 0xf3, 0x2a, 0x13, 0x53, 0x1e, + 0x01, 0xf6, 0x58, 0x6e, 0xdc, 0xfa, 0x6d, 0xc8, 0xb9, 0x1e, 0x41, 0x5c, 0xc2, 0x95, 0xa8, 0x95, + 0x04, 0x12, 0x95, 0x4b, 0x2a, 0x7f, 0x46, 
0x20, 0x77, 0x08, 0x75, 0x8c, 0xbe, 0xfb, 0x70, 0xe4, + 0xc4, 0x53, 0xe1, 0x25, 0xa7, 0xe4, 0x7d, 0x28, 0xf9, 0xb9, 0xa6, 0xb9, 0x84, 0x5e, 0x3c, 0x13, + 0x14, 0x7d, 0xd1, 0x2e, 0xa1, 0xca, 0xc7, 0xb0, 0x76, 0x2e, 0x66, 0x11, 0x8a, 0x06, 0xe4, 0x2d, + 0x26, 0x22, 0x62, 0x51, 0x09, 0x0b, 0x12, 0x57, 0x55, 0x05, 0x5f, 0x19, 0xc3, 0x0d, 0x61, 0xac, + 0x43, 0xa8, 0xee, 0x45, 0xd7, 0x77, 0x7c, 0x19, 0x72, 0xa6, 0x61, 0x19, 0x94, 0xf9, 0xba, 0xa4, + 0xf2, 0x85, 0xe7, 0x20, 0xfb, 0xd0, 0xc6, 0xc4, 0xd1, 0xc4, 0x1e, 0x69, 0x26, 0x50, 0x66, 0xf4, + 0x7d, 0xe2, 0x70, 0x7b, 0xde, 0x1f, 0x07, 0xc1, 0xcf, 0xf0, 0xb3, 0x16, 0x3b, 0xee, 0x41, 0x6d, + 0x6a, 0x47, 0x01, 0xfb, 0x1e, 0x2c, 0x58, 0x82, 0x26, 0x80, 0xd7, 0x93, 0xc0, 0x03, 0x9d, 0x40, + 0x52, 0xe9, 0xc3, 0x72, 0x7c, 0x90, 0xb9, 0x6a, 0x10, 0xbc, 0x7a, 0xd5, 0x9b, 0xf4, 0x4f, 0x08, + 0x0d, 0x3a, 0x4d, 0xc6, 0x6b, 0x16, 0x9c, 0xc6, 0x5b, 0xcd, 0x7f, 0x11, 0x5c, 0x4b, 0x4c, 0x13, + 0x5e, 0x2c, 0x86, 0xce, 0xc8, 0xd2, 0xfc, 0xff, 0xa6, 0x61, 0x5e, 0x97, 0x3d, 0x7a, 0x5b, 0x90, + 0xdb, 0x83, 0x68, 0xe2, 0xa7, 0x63, 0x89, 0x1f, 0xb6, 0xd2, 0xcc, 0x4b, 0x6d, 0xa5, 0x61, 0xaf, + 0xcb, 0x5e, 0xde, 0xeb, 0xfe, 0x8e, 0x20, 0xc7, 0x3d, 0x7c, 0x59, 0xc9, 0x2f, 0xc1, 0x02, 0xb1, + 0xfb, 0xa3, 0x81, 0x61, 0x1f, 0xb1, 0xec, 0xc8, 0xa9, 0xc1, 0x1a, 0xef, 0x8b, 0x5a, 0xe0, 0x15, + 0x97, 0xd2, 0xd6, 0x47, 0xc2, 0xf7, 0x7b, 0x73, 0xf9, 0x7e, 0x68, 0xbb, 0xfa, 0x90, 0x6c, 0x9d, + 0x52, 0xd2, 0x35, 0x8d, 0xbe, 0x5f, 0x2e, 0x9a, 0xb0, 0x18, 0xbb, 0x26, 0x57, 0x1f, 0xa0, 0x15, + 0x0d, 0x4a, 0x51, 0x0e, 0xbe, 0x29, 0x06, 0x6a, 0x5e, 0xca, 0x97, 0x7c, 0x6d, 0xc6, 0x0e, 0x47, + 0x67, 0x8c, 0x21, 0xcb, 0x7a, 0x38, 0x3f, 0x74, 0xf6, 0x1d, 0xfe, 0xdb, 0xe2, 0xd7, 0x82, 0x2f, + 0x94, 0x5f, 0x23, 0x28, 0x87, 0xf9, 0xf5, 0xd0, 0x30, 0xc9, 0x77, 0x91, 0x5e, 0x12, 0x2c, 0x0c, + 0x0d, 0x93, 0x30, 0x0c, 0x7c, 0xbb, 0x60, 0xed, 0x61, 0x0b, 0xe3, 0xcc, 0x23, 0xf5, 0x4e, 0x03, + 0x8a, 0x91, 0x6e, 0x84, 0x17, 0xa1, 0xd0, 0xde, 0xd5, 0x3a, 0xad, 0xce, 0x9e, 0xfa, 0xf3, 0x4a, + 0xca, 0x9b, 0xfc, 0x9b, 0xdb, 0xde, 0xb4, 0x5f, 0x41, 0xef, 0x3c, 0x86, 0x42, 0xe0, 0x2c, 0x2e, + 0x40, 0xae, 0xf5, 0xe9, 0x61, 0xf3, 0x93, 0x4a, 0xca, 0x53, 0xd9, 0xdd, 0x3b, 0xd0, 0xf8, 0x12, + 0xe1, 0x6b, 0x50, 0x54, 0x5b, 0x8f, 0x5a, 0x3f, 0xd3, 0x3a, 0xcd, 0x83, 0xed, 0x9d, 0x4a, 0x1a, + 0x63, 0x28, 0x73, 0xc2, 0xee, 0x9e, 0xa0, 0x65, 0xee, 0xfc, 0xeb, 0x35, 0x58, 0xf0, 0xbd, 0xc1, + 0x1f, 0x40, 0x76, 0x7f, 0xe2, 0x1e, 0xe3, 0x1b, 0xe1, 0x4d, 0xf8, 0xa9, 0x63, 0x50, 0x22, 0xca, + 0x92, 0x54, 0x9b, 0xa2, 0xf3, 0xeb, 0xae, 0xa4, 0xf0, 0x03, 0x28, 0x46, 0xc6, 0x41, 0xbc, 0x1c, + 0x1b, 0x7d, 0x7d, 0xfd, 0x95, 0x19, 0x03, 0x71, 0x68, 0xe3, 0x16, 0xc2, 0x7b, 0x50, 0x66, 0x2c, + 0x7f, 0xdc, 0x73, 0xf1, 0x1b, 0xbe, 0xca, 0xac, 0x3f, 0x8c, 0xd2, 0xea, 0x39, 0xdc, 0x00, 0xd6, + 0x4e, 0xfc, 0xbd, 0x45, 0x9a, 0xf5, 0x34, 0x93, 0x04, 0x37, 0x63, 0xaa, 0x52, 0x52, 0xb8, 0x05, + 0x10, 0xce, 0x24, 0xf8, 0xf5, 0x98, 0x70, 0x74, 0x8e, 0x92, 0xa4, 0x59, 0xac, 0xc0, 0xcc, 0x16, + 0x14, 0x82, 0xce, 0x8a, 0xeb, 0x33, 0x9a, 0x2d, 0x37, 0x72, 0x7e, 0x1b, 0x56, 0x52, 0xf8, 0x21, + 0x94, 0x9a, 0xa6, 0x39, 0x8f, 0x19, 0x29, 0xca, 0x71, 0x93, 0x76, 0xcc, 0xa0, 0x1b, 0x24, 0x9b, + 0x19, 0x7e, 0x2b, 0xb8, 0x55, 0x17, 0x76, 0x68, 0xe9, 0xed, 0x4b, 0xe5, 0x82, 0xdd, 0x0e, 0xe0, + 0x5a, 0xa2, 0xf7, 0x60, 0x39, 0xa1, 0x9d, 0x68, 0x83, 0xd2, 0xda, 0xb9, 0xfc, 0xc0, 0x6a, 0x4f, + 0x4c, 0xc1, 0xf1, 0xa7, 0x39, 0xac, 0x4c, 0x1f, 0x42, 0xf2, 0xfd, 0x50, 0xfa, 0xde, 0x85, 0x32, + 0x91, 0xac, 0x3c, 0x81, 0x1b, 0xb3, 0x5f, 0xb0, 0xf0, 0xcd, 0x19, 
0x39, 0x33, 0xfd, 0x1a, 0x27, + 0xbd, 0x75, 0x99, 0x58, 0x64, 0xb3, 0x0e, 0x94, 0xa2, 0x1d, 0x15, 0xaf, 0x5c, 0xf0, 0x60, 0x20, + 0xbd, 0x31, 0x9b, 0x19, 0x9a, 0xdb, 0xfa, 0xe8, 0xe9, 0x73, 0x39, 0xf5, 0xf5, 0x73, 0x39, 0xf5, + 0xcd, 0x73, 0x19, 0xfd, 0xea, 0x4c, 0x46, 0x7f, 0x3c, 0x93, 0xd1, 0x57, 0x67, 0x32, 0x7a, 0x7a, + 0x26, 0xa3, 0x7f, 0x9f, 0xc9, 0xe8, 0x3f, 0x67, 0x72, 0xea, 0x9b, 0x33, 0x19, 0xfd, 0xf6, 0x85, + 0x9c, 0x7a, 0xfa, 0x42, 0x4e, 0x7d, 0xfd, 0x42, 0x4e, 0xfd, 0x22, 0xdf, 0x37, 0x0d, 0x62, 0xd3, + 0x5e, 0x9e, 0x3d, 0xc4, 0xde, 0xfd, 0x7f, 0x00, 0x00, 0x00, 0xff, 0xff, 0xf8, 0x33, 0xc8, 0xcb, + 0x03, 0x16, 0x00, 0x00, } func (x CountMethod) String() string { @@ -2300,20 +1979,6 @@ func (x MatchType) String() string { } return strconv.Itoa(int(x)) } -func (x ReadRequest_ResponseType) String() string { - s, ok := ReadRequest_ResponseType_name[int32(x)] - if ok { - return s - } - return strconv.Itoa(int(x)) -} -func (x StreamChunk_Encoding) String() string { - s, ok := StreamChunk_Encoding_name[int32(x)] - if ok { - return s - } - return strconv.Itoa(int(x)) -} func (x ActiveSeriesRequest_RequestType) String() string { s, ok := ActiveSeriesRequest_RequestType_name[int32(x)] if ok { @@ -2515,14 +2180,14 @@ func (this *LabelValueSeriesCount) Equal(that interface{}) bool { } return true } -func (this *ReadRequest) Equal(that interface{}) bool { +func (this *QueryRequest) Equal(that interface{}) bool { if that == nil { return this == nil } - that1, ok := that.(*ReadRequest) + that1, ok := that.(*QueryRequest) if !ok { - that2, ok := that.(ReadRequest) + that2, ok := that.(QueryRequest) if ok { that1 = &that2 } else { @@ -2534,32 +2199,33 @@ func (this *ReadRequest) Equal(that interface{}) bool { } else if this == nil { return false } - if len(this.Queries) != len(that1.Queries) { + if this.StartTimestampMs != that1.StartTimestampMs { return false } - for i := range this.Queries { - if !this.Queries[i].Equal(that1.Queries[i]) { - return false - } + if this.EndTimestampMs != that1.EndTimestampMs { + return false } - if len(this.AcceptedResponseTypes) != len(that1.AcceptedResponseTypes) { + if len(this.Matchers) != len(that1.Matchers) { return false } - for i := range this.AcceptedResponseTypes { - if this.AcceptedResponseTypes[i] != that1.AcceptedResponseTypes[i] { + for i := range this.Matchers { + if !this.Matchers[i].Equal(that1.Matchers[i]) { return false } } + if this.StreamingChunksBatchSize != that1.StreamingChunksBatchSize { + return false + } return true } -func (this *ReadResponse) Equal(that interface{}) bool { +func (this *ExemplarQueryRequest) Equal(that interface{}) bool { if that == nil { return this == nil } - that1, ok := that.(*ReadResponse) + that1, ok := that.(*ExemplarQueryRequest) if !ok { - that2, ok := that.(ReadResponse) + that2, ok := that.(ExemplarQueryRequest) if ok { that1 = &that2 } else { @@ -2571,180 +2237,11 @@ func (this *ReadResponse) Equal(that interface{}) bool { } else if this == nil { return false } - if len(this.Results) != len(that1.Results) { + if this.StartTimestampMs != that1.StartTimestampMs { return false } - for i := range this.Results { - if !this.Results[i].Equal(that1.Results[i]) { - return false - } - } - return true -} -func (this *StreamReadResponse) Equal(that interface{}) bool { - if that == nil { - return this == nil - } - - that1, ok := that.(*StreamReadResponse) - if !ok { - that2, ok := that.(StreamReadResponse) - if ok { - that1 = &that2 - } else { - return false - } - } - if that1 == nil { - return this == nil - } else 
if this == nil { - return false - } - if len(this.ChunkedSeries) != len(that1.ChunkedSeries) { - return false - } - for i := range this.ChunkedSeries { - if !this.ChunkedSeries[i].Equal(that1.ChunkedSeries[i]) { - return false - } - } - if this.QueryIndex != that1.QueryIndex { - return false - } - return true -} -func (this *StreamChunkedSeries) Equal(that interface{}) bool { - if that == nil { - return this == nil - } - - that1, ok := that.(*StreamChunkedSeries) - if !ok { - that2, ok := that.(StreamChunkedSeries) - if ok { - that1 = &that2 - } else { - return false - } - } - if that1 == nil { - return this == nil - } else if this == nil { - return false - } - if len(this.Labels) != len(that1.Labels) { - return false - } - for i := range this.Labels { - if !this.Labels[i].Equal(that1.Labels[i]) { - return false - } - } - if len(this.Chunks) != len(that1.Chunks) { - return false - } - for i := range this.Chunks { - if !this.Chunks[i].Equal(&that1.Chunks[i]) { - return false - } - } - return true -} -func (this *StreamChunk) Equal(that interface{}) bool { - if that == nil { - return this == nil - } - - that1, ok := that.(*StreamChunk) - if !ok { - that2, ok := that.(StreamChunk) - if ok { - that1 = &that2 - } else { - return false - } - } - if that1 == nil { - return this == nil - } else if this == nil { - return false - } - if this.MinTimeMs != that1.MinTimeMs { - return false - } - if this.MaxTimeMs != that1.MaxTimeMs { - return false - } - if this.Type != that1.Type { - return false - } - if !this.Data.Equal(that1.Data) { - return false - } - return true -} -func (this *QueryRequest) Equal(that interface{}) bool { - if that == nil { - return this == nil - } - - that1, ok := that.(*QueryRequest) - if !ok { - that2, ok := that.(QueryRequest) - if ok { - that1 = &that2 - } else { - return false - } - } - if that1 == nil { - return this == nil - } else if this == nil { - return false - } - if this.StartTimestampMs != that1.StartTimestampMs { - return false - } - if this.EndTimestampMs != that1.EndTimestampMs { - return false - } - if len(this.Matchers) != len(that1.Matchers) { - return false - } - for i := range this.Matchers { - if !this.Matchers[i].Equal(that1.Matchers[i]) { - return false - } - } - if this.StreamingChunksBatchSize != that1.StreamingChunksBatchSize { - return false - } - return true -} -func (this *ExemplarQueryRequest) Equal(that interface{}) bool { - if that == nil { - return this == nil - } - - that1, ok := that.(*ExemplarQueryRequest) - if !ok { - that2, ok := that.(ExemplarQueryRequest) - if ok { - that1 = &that2 - } else { - return false - } - } - if that1 == nil { - return this == nil - } else if this == nil { - return false - } - if this.StartTimestampMs != that1.StartTimestampMs { - return false - } - if this.EndTimestampMs != that1.EndTimestampMs { - return false + if this.EndTimestampMs != that1.EndTimestampMs { + return false } if len(this.Matchers) != len(that1.Matchers) { return false @@ -3613,74 +3110,6 @@ func (this *LabelValueSeriesCount) GoString() string { s = append(s, "}") return strings.Join(s, "") } -func (this *ReadRequest) GoString() string { - if this == nil { - return "nil" - } - s := make([]string, 0, 6) - s = append(s, "&client.ReadRequest{") - if this.Queries != nil { - s = append(s, "Queries: "+fmt.Sprintf("%#v", this.Queries)+",\n") - } - s = append(s, "AcceptedResponseTypes: "+fmt.Sprintf("%#v", this.AcceptedResponseTypes)+",\n") - s = append(s, "}") - return strings.Join(s, "") -} -func (this *ReadResponse) GoString() string { - if this == 
nil { - return "nil" - } - s := make([]string, 0, 5) - s = append(s, "&client.ReadResponse{") - if this.Results != nil { - s = append(s, "Results: "+fmt.Sprintf("%#v", this.Results)+",\n") - } - s = append(s, "}") - return strings.Join(s, "") -} -func (this *StreamReadResponse) GoString() string { - if this == nil { - return "nil" - } - s := make([]string, 0, 6) - s = append(s, "&client.StreamReadResponse{") - if this.ChunkedSeries != nil { - s = append(s, "ChunkedSeries: "+fmt.Sprintf("%#v", this.ChunkedSeries)+",\n") - } - s = append(s, "QueryIndex: "+fmt.Sprintf("%#v", this.QueryIndex)+",\n") - s = append(s, "}") - return strings.Join(s, "") -} -func (this *StreamChunkedSeries) GoString() string { - if this == nil { - return "nil" - } - s := make([]string, 0, 6) - s = append(s, "&client.StreamChunkedSeries{") - s = append(s, "Labels: "+fmt.Sprintf("%#v", this.Labels)+",\n") - if this.Chunks != nil { - vs := make([]*StreamChunk, len(this.Chunks)) - for i := range vs { - vs[i] = &this.Chunks[i] - } - s = append(s, "Chunks: "+fmt.Sprintf("%#v", vs)+",\n") - } - s = append(s, "}") - return strings.Join(s, "") -} -func (this *StreamChunk) GoString() string { - if this == nil { - return "nil" - } - s := make([]string, 0, 8) - s = append(s, "&client.StreamChunk{") - s = append(s, "MinTimeMs: "+fmt.Sprintf("%#v", this.MinTimeMs)+",\n") - s = append(s, "MaxTimeMs: "+fmt.Sprintf("%#v", this.MaxTimeMs)+",\n") - s = append(s, "Type: "+fmt.Sprintf("%#v", this.Type)+",\n") - s = append(s, "Data: "+fmt.Sprintf("%#v", this.Data)+",\n") - s = append(s, "}") - return strings.Join(s, "") -} func (this *QueryRequest) GoString() string { if this == nil { return "nil" @@ -4907,7 +4336,7 @@ func (m *LabelValueSeriesCount) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *ReadRequest) Marshal() (dAtA []byte, err error) { +func (m *QueryRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -4917,38 +4346,27 @@ func (m *ReadRequest) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ReadRequest) MarshalTo(dAtA []byte) (int, error) { +func (m *QueryRequest) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ReadRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *QueryRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if len(m.AcceptedResponseTypes) > 0 { - dAtA2 := make([]byte, len(m.AcceptedResponseTypes)*10) - var j1 int - for _, num := range m.AcceptedResponseTypes { - for num >= 1<<7 { - dAtA2[j1] = uint8(uint64(num)&0x7f | 0x80) - num >>= 7 - j1++ - } - dAtA2[j1] = uint8(num) - j1++ - } - i -= j1 - copy(dAtA[i:], dAtA2[:j1]) - i = encodeVarintIngester(dAtA, i, uint64(j1)) + if m.StreamingChunksBatchSize != 0 { + i = encodeVarintIngester(dAtA, i, uint64(m.StreamingChunksBatchSize)) i-- - dAtA[i] = 0x12 + dAtA[i] = 0x6 + i-- + dAtA[i] = 0xa0 } - if len(m.Queries) > 0 { - for iNdEx := len(m.Queries) - 1; iNdEx >= 0; iNdEx-- { + if len(m.Matchers) > 0 { + for iNdEx := len(m.Matchers) - 1; iNdEx >= 0; iNdEx-- { { - size, err := m.Queries[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Matchers[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -4956,13 +4374,23 @@ func (m *ReadRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintIngester(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa + dAtA[i] = 0x1a } } + if 
m.EndTimestampMs != 0 { + i = encodeVarintIngester(dAtA, i, uint64(m.EndTimestampMs)) + i-- + dAtA[i] = 0x10 + } + if m.StartTimestampMs != 0 { + i = encodeVarintIngester(dAtA, i, uint64(m.StartTimestampMs)) + i-- + dAtA[i] = 0x8 + } return len(dAtA) - i, nil } -func (m *ReadResponse) Marshal() (dAtA []byte, err error) { +func (m *ExemplarQueryRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -4972,20 +4400,20 @@ func (m *ReadResponse) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ReadResponse) MarshalTo(dAtA []byte) (int, error) { +func (m *ExemplarQueryRequest) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ReadResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ExemplarQueryRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if len(m.Results) > 0 { - for iNdEx := len(m.Results) - 1; iNdEx >= 0; iNdEx-- { + if len(m.Matchers) > 0 { + for iNdEx := len(m.Matchers) - 1; iNdEx >= 0; iNdEx-- { { - size, err := m.Results[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Matchers[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -4993,13 +4421,23 @@ func (m *ReadResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintIngester(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa + dAtA[i] = 0x1a } } + if m.EndTimestampMs != 0 { + i = encodeVarintIngester(dAtA, i, uint64(m.EndTimestampMs)) + i-- + dAtA[i] = 0x10 + } + if m.StartTimestampMs != 0 { + i = encodeVarintIngester(dAtA, i, uint64(m.StartTimestampMs)) + i-- + dAtA[i] = 0x8 + } return len(dAtA) - i, nil } -func (m *StreamReadResponse) Marshal() (dAtA []byte, err error) { +func (m *ActiveSeriesRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -5009,25 +4447,25 @@ func (m *StreamReadResponse) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *StreamReadResponse) MarshalTo(dAtA []byte) (int, error) { +func (m *ActiveSeriesRequest) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *StreamReadResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ActiveSeriesRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.QueryIndex != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.QueryIndex)) + if m.Type != 0 { + i = encodeVarintIngester(dAtA, i, uint64(m.Type)) i-- dAtA[i] = 0x10 } - if len(m.ChunkedSeries) > 0 { - for iNdEx := len(m.ChunkedSeries) - 1; iNdEx >= 0; iNdEx-- { + if len(m.Matchers) > 0 { + for iNdEx := len(m.Matchers) - 1; iNdEx >= 0; iNdEx-- { { - size, err := m.ChunkedSeries[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Matchers[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -5041,7 +4479,7 @@ func (m *StreamReadResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *StreamChunkedSeries) Marshal() (dAtA []byte, err error) { +func (m *QueryResponse) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -5051,254 +4489,12 @@ func (m *StreamChunkedSeries) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *StreamChunkedSeries) MarshalTo(dAtA []byte) (int, error) { +func (m 
*QueryResponse) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *StreamChunkedSeries) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if len(m.Chunks) > 0 { - for iNdEx := len(m.Chunks) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Chunks[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintIngester(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x12 - } - } - if len(m.Labels) > 0 { - for iNdEx := len(m.Labels) - 1; iNdEx >= 0; iNdEx-- { - { - size := m.Labels[iNdEx].Size() - i -= size - if _, err := m.Labels[iNdEx].MarshalTo(dAtA[i:]); err != nil { - return 0, err - } - i = encodeVarintIngester(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0xa - } - } - return len(dAtA) - i, nil -} - -func (m *StreamChunk) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *StreamChunk) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *StreamChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - { - size := m.Data.Size() - i -= size - if _, err := m.Data.MarshalTo(dAtA[i:]); err != nil { - return 0, err - } - i = encodeVarintIngester(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x22 - if m.Type != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.Type)) - i-- - dAtA[i] = 0x18 - } - if m.MaxTimeMs != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.MaxTimeMs)) - i-- - dAtA[i] = 0x10 - } - if m.MinTimeMs != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.MinTimeMs)) - i-- - dAtA[i] = 0x8 - } - return len(dAtA) - i, nil -} - -func (m *QueryRequest) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *QueryRequest) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *QueryRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.StreamingChunksBatchSize != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.StreamingChunksBatchSize)) - i-- - dAtA[i] = 0x6 - i-- - dAtA[i] = 0xa0 - } - if len(m.Matchers) > 0 { - for iNdEx := len(m.Matchers) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Matchers[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintIngester(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a - } - } - if m.EndTimestampMs != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.EndTimestampMs)) - i-- - dAtA[i] = 0x10 - } - if m.StartTimestampMs != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.StartTimestampMs)) - i-- - dAtA[i] = 0x8 - } - return len(dAtA) - i, nil -} - -func (m *ExemplarQueryRequest) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *ExemplarQueryRequest) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *ExemplarQueryRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l 
- if len(m.Matchers) > 0 { - for iNdEx := len(m.Matchers) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Matchers[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintIngester(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a - } - } - if m.EndTimestampMs != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.EndTimestampMs)) - i-- - dAtA[i] = 0x10 - } - if m.StartTimestampMs != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.StartTimestampMs)) - i-- - dAtA[i] = 0x8 - } - return len(dAtA) - i, nil -} - -func (m *ActiveSeriesRequest) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *ActiveSeriesRequest) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *ActiveSeriesRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if m.Type != 0 { - i = encodeVarintIngester(dAtA, i, uint64(m.Type)) - i-- - dAtA[i] = 0x10 - } - if len(m.Matchers) > 0 { - for iNdEx := len(m.Matchers) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Matchers[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintIngester(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0xa - } - } - return len(dAtA) - i, nil -} - -func (m *QueryResponse) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *QueryResponse) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *QueryResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *QueryResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -6026,20 +5222,20 @@ func (m *ActiveSeriesResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { var l int _ = l if len(m.BucketCount) > 0 { - dAtA7 := make([]byte, len(m.BucketCount)*10) - var j6 int + dAtA5 := make([]byte, len(m.BucketCount)*10) + var j4 int for _, num := range m.BucketCount { for num >= 1<<7 { - dAtA7[j6] = uint8(uint64(num)&0x7f | 0x80) + dAtA5[j4] = uint8(uint64(num)&0x7f | 0x80) num >>= 7 - j6++ + j4++ } - dAtA7[j6] = uint8(num) - j6++ + dAtA5[j4] = uint8(num) + j4++ } - i -= j6 - copy(dAtA[i:], dAtA7[:j6]) - i = encodeVarintIngester(dAtA, i, uint64(j6)) + i -= j4 + copy(dAtA[i:], dAtA5[:j4]) + i = encodeVarintIngester(dAtA, i, uint64(j4)) i-- dAtA[i] = 0x12 } @@ -6426,134 +5622,38 @@ func (m *LabelValueSeriesCount) Size() (n int) { return n } -func (m *ReadRequest) Size() (n int) { +func (m *QueryRequest) Size() (n int) { if m == nil { return 0 } var l int _ = l - if len(m.Queries) > 0 { - for _, e := range m.Queries { + if m.StartTimestampMs != 0 { + n += 1 + sovIngester(uint64(m.StartTimestampMs)) + } + if m.EndTimestampMs != 0 { + n += 1 + sovIngester(uint64(m.EndTimestampMs)) + } + if len(m.Matchers) > 0 { + for _, e := range m.Matchers { l = e.Size() n += 1 + l + sovIngester(uint64(l)) } } - if len(m.AcceptedResponseTypes) > 0 { - l = 0 - for _, e := range m.AcceptedResponseTypes { - l += sovIngester(uint64(e)) - } - n += 1 + sovIngester(uint64(l)) + l + if m.StreamingChunksBatchSize != 0 { + n += 2 + sovIngester(uint64(m.StreamingChunksBatchSize)) } return n } -func (m *ReadResponse) 
Size() (n int) { +func (m *ExemplarQueryRequest) Size() (n int) { if m == nil { return 0 } var l int _ = l - if len(m.Results) > 0 { - for _, e := range m.Results { - l = e.Size() - n += 1 + l + sovIngester(uint64(l)) - } - } - return n -} - -func (m *StreamReadResponse) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if len(m.ChunkedSeries) > 0 { - for _, e := range m.ChunkedSeries { - l = e.Size() - n += 1 + l + sovIngester(uint64(l)) - } - } - if m.QueryIndex != 0 { - n += 1 + sovIngester(uint64(m.QueryIndex)) - } - return n -} - -func (m *StreamChunkedSeries) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if len(m.Labels) > 0 { - for _, e := range m.Labels { - l = e.Size() - n += 1 + l + sovIngester(uint64(l)) - } - } - if len(m.Chunks) > 0 { - for _, e := range m.Chunks { - l = e.Size() - n += 1 + l + sovIngester(uint64(l)) - } - } - return n -} - -func (m *StreamChunk) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.MinTimeMs != 0 { - n += 1 + sovIngester(uint64(m.MinTimeMs)) - } - if m.MaxTimeMs != 0 { - n += 1 + sovIngester(uint64(m.MaxTimeMs)) - } - if m.Type != 0 { - n += 1 + sovIngester(uint64(m.Type)) - } - l = m.Data.Size() - n += 1 + l + sovIngester(uint64(l)) - return n -} - -func (m *QueryRequest) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.StartTimestampMs != 0 { - n += 1 + sovIngester(uint64(m.StartTimestampMs)) - } - if m.EndTimestampMs != 0 { - n += 1 + sovIngester(uint64(m.EndTimestampMs)) - } - if len(m.Matchers) > 0 { - for _, e := range m.Matchers { - l = e.Size() - n += 1 + l + sovIngester(uint64(l)) - } - } - if m.StreamingChunksBatchSize != 0 { - n += 2 + sovIngester(uint64(m.StreamingChunksBatchSize)) - } - return n -} - -func (m *ExemplarQueryRequest) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if m.StartTimestampMs != 0 { - n += 1 + sovIngester(uint64(m.StartTimestampMs)) + if m.StartTimestampMs != 0 { + n += 1 + sovIngester(uint64(m.StartTimestampMs)) } if m.EndTimestampMs != 0 { n += 1 + sovIngester(uint64(m.EndTimestampMs)) @@ -7126,82 +6226,6 @@ func (this *LabelValueSeriesCount) String() string { }, "") return s } -func (this *ReadRequest) String() string { - if this == nil { - return "nil" - } - repeatedStringForQueries := "[]*QueryRequest{" - for _, f := range this.Queries { - repeatedStringForQueries += strings.Replace(f.String(), "QueryRequest", "QueryRequest", 1) + "," - } - repeatedStringForQueries += "}" - s := strings.Join([]string{`&ReadRequest{`, - `Queries:` + repeatedStringForQueries + `,`, - `AcceptedResponseTypes:` + fmt.Sprintf("%v", this.AcceptedResponseTypes) + `,`, - `}`, - }, "") - return s -} -func (this *ReadResponse) String() string { - if this == nil { - return "nil" - } - repeatedStringForResults := "[]*QueryResponse{" - for _, f := range this.Results { - repeatedStringForResults += strings.Replace(f.String(), "QueryResponse", "QueryResponse", 1) + "," - } - repeatedStringForResults += "}" - s := strings.Join([]string{`&ReadResponse{`, - `Results:` + repeatedStringForResults + `,`, - `}`, - }, "") - return s -} -func (this *StreamReadResponse) String() string { - if this == nil { - return "nil" - } - repeatedStringForChunkedSeries := "[]*StreamChunkedSeries{" - for _, f := range this.ChunkedSeries { - repeatedStringForChunkedSeries += strings.Replace(f.String(), "StreamChunkedSeries", "StreamChunkedSeries", 1) + "," - } - repeatedStringForChunkedSeries += "}" - s := 
strings.Join([]string{`&StreamReadResponse{`, - `ChunkedSeries:` + repeatedStringForChunkedSeries + `,`, - `QueryIndex:` + fmt.Sprintf("%v", this.QueryIndex) + `,`, - `}`, - }, "") - return s -} -func (this *StreamChunkedSeries) String() string { - if this == nil { - return "nil" - } - repeatedStringForChunks := "[]StreamChunk{" - for _, f := range this.Chunks { - repeatedStringForChunks += strings.Replace(strings.Replace(f.String(), "StreamChunk", "StreamChunk", 1), `&`, ``, 1) + "," - } - repeatedStringForChunks += "}" - s := strings.Join([]string{`&StreamChunkedSeries{`, - `Labels:` + fmt.Sprintf("%v", this.Labels) + `,`, - `Chunks:` + repeatedStringForChunks + `,`, - `}`, - }, "") - return s -} -func (this *StreamChunk) String() string { - if this == nil { - return "nil" - } - s := strings.Join([]string{`&StreamChunk{`, - `MinTimeMs:` + fmt.Sprintf("%v", this.MinTimeMs) + `,`, - `MaxTimeMs:` + fmt.Sprintf("%v", this.MaxTimeMs) + `,`, - `Type:` + fmt.Sprintf("%v", this.Type) + `,`, - `Data:` + fmt.Sprintf("%v", this.Data) + `,`, - `}`, - }, "") - return s -} func (this *QueryRequest) String() string { if this == nil { return "nil" @@ -8325,619 +7349,6 @@ func (m *LabelValueSeriesCount) Unmarshal(dAtA []byte) error { } return nil } -func (m *ReadRequest) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: ReadRequest: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: ReadRequest: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Queries", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthIngester - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthIngester - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Queries = append(m.Queries, &QueryRequest{}) - if err := m.Queries[len(m.Queries)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType == 0 { - var v ReadRequest_ResponseType - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= ReadRequest_ResponseType(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.AcceptedResponseTypes = append(m.AcceptedResponseTypes, v) - } else if wireType == 2 { - var packedLen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - packedLen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if packedLen < 0 { - return ErrInvalidLengthIngester - } - postIndex := iNdEx + packedLen - if postIndex < 0 { - return ErrInvalidLengthIngester - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - var 
elementCount int - if elementCount != 0 && len(m.AcceptedResponseTypes) == 0 { - m.AcceptedResponseTypes = make([]ReadRequest_ResponseType, 0, elementCount) - } - for iNdEx < postIndex { - var v ReadRequest_ResponseType - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= ReadRequest_ResponseType(b&0x7F) << shift - if b < 0x80 { - break - } - } - m.AcceptedResponseTypes = append(m.AcceptedResponseTypes, v) - } - } else { - return fmt.Errorf("proto: wrong wireType = %d for field AcceptedResponseTypes", wireType) - } - default: - iNdEx = preIndex - skippy, err := skipIngester(dAtA[iNdEx:]) - if err != nil { - return err - } - if skippy < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *ReadResponse) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: ReadResponse: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: ReadResponse: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Results", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthIngester - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthIngester - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Results = append(m.Results, &QueryResponse{}) - if err := m.Results[len(m.Results)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipIngester(dAtA[iNdEx:]) - if err != nil { - return err - } - if skippy < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *StreamReadResponse) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: StreamReadResponse: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: StreamReadResponse: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if 
wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ChunkedSeries", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthIngester - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthIngester - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.ChunkedSeries = append(m.ChunkedSeries, &StreamChunkedSeries{}) - if err := m.ChunkedSeries[len(m.ChunkedSeries)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field QueryIndex", wireType) - } - m.QueryIndex = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.QueryIndex |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - default: - iNdEx = preIndex - skippy, err := skipIngester(dAtA[iNdEx:]) - if err != nil { - return err - } - if skippy < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *StreamChunkedSeries) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: StreamChunkedSeries: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: StreamChunkedSeries: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthIngester - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthIngester - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Labels = append(m.Labels, github_com_grafana_mimir_pkg_mimirpb.LabelAdapter{}) - if err := m.Labels[len(m.Labels)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Chunks", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthIngester - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthIngester - } - if postIndex > l { - return 
io.ErrUnexpectedEOF - } - m.Chunks = append(m.Chunks, StreamChunk{}) - if err := m.Chunks[len(m.Chunks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipIngester(dAtA[iNdEx:]) - if err != nil { - return err - } - if skippy < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *StreamChunk) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: StreamChunk: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: StreamChunk: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field MinTimeMs", wireType) - } - m.MinTimeMs = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.MinTimeMs |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 2: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field MaxTimeMs", wireType) - } - m.MaxTimeMs = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.MaxTimeMs |= int64(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 3: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType) - } - m.Type = 0 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - m.Type |= StreamChunk_Encoding(b&0x7F) << shift - if b < 0x80 { - break - } - } - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType) - } - var byteLen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowIngester - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - byteLen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if byteLen < 0 { - return ErrInvalidLengthIngester - } - postIndex := iNdEx + byteLen - if postIndex < 0 { - return ErrInvalidLengthIngester - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Data.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipIngester(dAtA[iNdEx:]) - if err != nil { - return err - } - if skippy < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) < 0 { - return ErrInvalidLengthIngester - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} func (m *QueryRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 diff --git 
a/pkg/ingester/client/ingester.proto b/pkg/ingester/client/ingester.proto index 3cfc24e2d35..4640145ec45 100644 --- a/pkg/ingester/client/ingester.proto +++ b/pkg/ingester/client/ingester.proto @@ -76,50 +76,7 @@ message LabelValueSeriesCount { map label_value_series = 2; } -message ReadRequest { - repeated QueryRequest queries = 1; - - enum ResponseType { - SAMPLES = 0; - STREAMED_XOR_CHUNKS = 1; - } - repeated ResponseType accepted_response_types = 2; -} - -message ReadResponse { - repeated QueryResponse results = 1; -} - -message StreamReadResponse { - repeated StreamChunkedSeries chunked_series = 1; - - int64 query_index = 2; -} - -message StreamChunkedSeries { - repeated cortexpb.LabelPair labels = 1 [(gogoproto.nullable) = false, (gogoproto.customtype) = "github.com/grafana/mimir/pkg/mimirpb.LabelAdapter"]; - repeated StreamChunk chunks = 2 [(gogoproto.nullable) = false]; -} - -message StreamChunk { - int64 min_time_ms = 1; - int64 max_time_ms = 2; - - enum Encoding { - UNKNOWN = 0; - XOR = 1; - HISTOGRAM = 2; - FLOAT_HISTOGRAM = 3; - } - Encoding type = 3; - bytes data = 4 [(gogoproto.nullable) = false, (gogoproto.customtype) = "github.com/grafana/mimir/pkg/mimirpb.UnsafeByteSlice"]; -} - message QueryRequest { - // This QueryRequest message is also used for remote read requests, which includes a hints field we don't support. - reserved 4; - reserved "hints"; - int64 start_timestamp_ms = 1; int64 end_timestamp_ms = 2; repeated LabelMatcher matchers = 3; diff --git a/pkg/ingester/ingester_test.go b/pkg/ingester/ingester_test.go index 0e351539ea2..2199e8233fb 100644 --- a/pkg/ingester/ingester_test.go +++ b/pkg/ingester/ingester_test.go @@ -3603,6 +3603,14 @@ func Test_Ingester_LabelValues(t *testing.T) { }) } +func l2m(lbls labels.Labels) model.Metric { + m := make(model.Metric, 16) + lbls.Range(func(l labels.Label) { + m[model.LabelName(l.Name)] = model.LabelValue(l.Value) + }) + return m +} + func Test_Ingester_Query(t *testing.T) { series := []series{ {labels.FromStrings(labels.MetricName, "test_1", "status", "200", "route", "get_user"), 1, 100000}, @@ -3631,8 +3639,8 @@ func Test_Ingester_Query(t *testing.T) { {Type: client.EQUAL, Name: model.MetricNameLabel, Value: "test_1"}, }, expected: model.Matrix{ - &model.SampleStream{Metric: util.LabelsToMetric(series[0].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 100000}}}, - &model.SampleStream{Metric: util.LabelsToMetric(series[1].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 110000}}}, + &model.SampleStream{Metric: l2m(series[0].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 100000}}}, + &model.SampleStream{Metric: l2m(series[1].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 110000}}}, }, }, "should filter series by != matcher": { @@ -3642,7 +3650,7 @@ func Test_Ingester_Query(t *testing.T) { {Type: client.NOT_EQUAL, Name: model.MetricNameLabel, Value: "test_1"}, }, expected: model.Matrix{ - &model.SampleStream{Metric: util.LabelsToMetric(series[2].lbls), Values: []model.SamplePair{{Value: 2, Timestamp: 200000}}}, + &model.SampleStream{Metric: l2m(series[2].lbls), Values: []model.SamplePair{{Value: 2, Timestamp: 200000}}}, }, }, "should filter series by =~ matcher": { @@ -3652,8 +3660,8 @@ func Test_Ingester_Query(t *testing.T) { {Type: client.REGEX_MATCH, Name: model.MetricNameLabel, Value: ".*_1"}, }, expected: model.Matrix{ - &model.SampleStream{Metric: util.LabelsToMetric(series[0].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 100000}}}, - &model.SampleStream{Metric: 
util.LabelsToMetric(series[1].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 110000}}}, + &model.SampleStream{Metric: l2m(series[0].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 100000}}}, + &model.SampleStream{Metric: l2m(series[1].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 110000}}}, }, }, "should filter series by !~ matcher": { @@ -3663,7 +3671,7 @@ func Test_Ingester_Query(t *testing.T) { {Type: client.REGEX_NO_MATCH, Name: model.MetricNameLabel, Value: ".*_1"}, }, expected: model.Matrix{ - &model.SampleStream{Metric: util.LabelsToMetric(series[2].lbls), Values: []model.SamplePair{{Value: 2, Timestamp: 200000}}}, + &model.SampleStream{Metric: l2m(series[2].lbls), Values: []model.SamplePair{{Value: 2, Timestamp: 200000}}}, }, }, "should filter series by multiple matchers": { @@ -3674,7 +3682,7 @@ func Test_Ingester_Query(t *testing.T) { {Type: client.REGEX_MATCH, Name: "status", Value: "5.."}, }, expected: model.Matrix{ - &model.SampleStream{Metric: util.LabelsToMetric(series[1].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 110000}}}, + &model.SampleStream{Metric: l2m(series[1].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 110000}}}, }, }, "should filter series by matcher and time range": { @@ -3684,7 +3692,7 @@ func Test_Ingester_Query(t *testing.T) { {Type: client.EQUAL, Name: model.MetricNameLabel, Value: "test_1"}, }, expected: model.Matrix{ - &model.SampleStream{Metric: util.LabelsToMetric(series[0].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 100000}}}, + &model.SampleStream{Metric: l2m(series[0].lbls), Values: []model.SamplePair{{Value: 1, Timestamp: 100000}}}, }, }, } diff --git a/pkg/mimir/runtime_config_test.go b/pkg/mimir/runtime_config_test.go index 5224becb5d6..7c1cdc21851 100644 --- a/pkg/mimir/runtime_config_test.go +++ b/pkg/mimir/runtime_config_test.go @@ -10,6 +10,7 @@ import ( "strings" "testing" + "github.com/google/go-cmp/cmp" "github.com/grafana/dskit/flagext" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -54,9 +55,9 @@ overrides: loadedLimits := runtimeCfg.(*runtimeConfigValues).TenantLimits require.Equal(t, 3, len(loadedLimits)) - require.Equal(t, expected, *loadedLimits["1234"]) - require.Equal(t, expected, *loadedLimits["1235"]) - require.Equal(t, expected, *loadedLimits["1236"]) + require.True(t, cmp.Equal(expected, *loadedLimits["1234"], cmp.AllowUnexported(validation.Limits{}))) + require.True(t, cmp.Equal(expected, *loadedLimits["1235"], cmp.AllowUnexported(validation.Limits{}))) + require.True(t, cmp.Equal(expected, *loadedLimits["1236"], cmp.AllowUnexported(validation.Limits{}))) } func TestRuntimeConfigLoader_ShouldLoadEmptyFile(t *testing.T) { diff --git a/pkg/mimirpb/compat.go b/pkg/mimirpb/compat.go index fed2874be42..8242bda1774 100644 --- a/pkg/mimirpb/compat.go +++ b/pkg/mimirpb/compat.go @@ -21,8 +21,6 @@ import ( "github.com/prometheus/prometheus/model/histogram" "github.com/prometheus/prometheus/promql" "github.com/prometheus/prometheus/util/jsonutil" - - "github.com/grafana/mimir/pkg/util" ) // ToWriteRequest converts matched slices of Labels, Samples, Exemplars, and Metadata into a WriteRequest @@ -101,7 +99,24 @@ func (req *WriteRequest) AddExemplarsAt(i int, exemplars []*Exemplar) *WriteRequ // FromLabelAdaptersToMetric converts []LabelAdapter to a model.Metric. // Don't do this on any performance sensitive paths. 
func FromLabelAdaptersToMetric(ls []LabelAdapter) model.Metric { - return util.LabelsToMetric(FromLabelAdaptersToLabels(ls)) + m := make(model.Metric, len(ls)) + for _, la := range ls { + m[model.LabelName(la.Name)] = model.LabelValue(la.Value) + } + return m +} + +// FromLabelAdaptersToKeyString makes a string to be used as a key to a map. +// It's much simpler than FromLabelAdaptersToString, but not human-readable. +func FromLabelAdaptersToKeyString(ls []LabelAdapter) string { + buf := make([]byte, 0, 1024) + for i := range ls { + buf = append(buf, '\xff') + buf = append(buf, ls[i].Name...) + buf = append(buf, '\xff') + buf = append(buf, ls[i].Value...) + } + return string(buf) } // FromLabelAdaptersToString formats label adapters as a metric name with labels, while preserving diff --git a/pkg/mimirpb/compat_test.go b/pkg/mimirpb/compat_test.go index cb8d24f2781..70c2df58843 100644 --- a/pkg/mimirpb/compat_test.go +++ b/pkg/mimirpb/compat_test.go @@ -220,7 +220,7 @@ func TestFromFPointsToSamples(t *testing.T) { // Check that Prometheus FPoint and Mimir Sample types converted // into each other with unsafe.Pointer are compatible func TestPrometheusFPointInSyncWithMimirPbSample(t *testing.T) { - test.RequireSameShape(t, promql.FPoint{}, Sample{}, true) + test.RequireSameShape(t, promql.FPoint{}, Sample{}, true, false) } func BenchmarkFromFPointsToSamples(b *testing.B) { @@ -248,7 +248,7 @@ func TestFromHPointsToHistograms(t *testing.T) { // Check that Prometheus HPoint and Mimir FloatHistogramPair types converted // into each other with unsafe.Pointer are compatible func TestPrometheusHPointInSyncWithMimirPbFloatHistogramPair(t *testing.T) { - test.RequireSameShape(t, promql.HPoint{}, FloatHistogramPair{}, true) + test.RequireSameShape(t, promql.HPoint{}, FloatHistogramPair{}, true, false) } func BenchmarkFromHPointsToHistograms(b *testing.B) { @@ -625,7 +625,7 @@ func TestFromFloatHistogramToPromHistogram(t *testing.T) { // Check that Prometheus and Mimir SampleHistogram types converted // into each other with unsafe.Pointer are compatible func TestPrometheusSampleHistogramInSyncWithMimirPbSampleHistogram(t *testing.T) { - test.RequireSameShape(t, model.SampleHistogram{}, SampleHistogram{}, false) + test.RequireSameShape(t, model.SampleHistogram{}, SampleHistogram{}, false, false) } // Check that Prometheus Label and MimirPb LabelAdapter types @@ -739,3 +739,57 @@ func TestCompareLabelAdapters(t *testing.T) { require.Equal(t, -sign(test.expected), sign(got), "unexpected comparison result for reverse test case %d", i) } } + +func TestRemoteWriteV1HistogramEquivalence(t *testing.T) { + test.RequireSameShape(t, prompb.Histogram{}, Histogram{}, false, true) +} + +// The main usecase for `LabelsToKeyString` is to generate hashKeys +// for maps. We are benchmarking that here. 
+func BenchmarkSeriesMap(b *testing.B) { + benchmarkSeriesMap(100000, b) +} + +func benchmarkSeriesMap(numSeries int, b *testing.B) { + series := makeSeries(numSeries) + sm := make(map[string]int, numSeries) + + b.ReportAllocs() + b.ResetTimer() + for n := 0; n < b.N; n++ { + for i, s := range series { + sm[FromLabelAdaptersToKeyString(s)] = i + } + + for _, s := range series { + _, ok := sm[FromLabelAdaptersToKeyString(s)] + if !ok { + b.Fatal("element missing") + } + } + + if len(sm) != numSeries { + b.Fatal("the number of series expected:", numSeries, "got:", len(sm)) + } + } +} + +func makeSeries(n int) [][]LabelAdapter { + series := make([][]LabelAdapter, 0, n) + for i := 0; i < n; i++ { + series = append(series, []LabelAdapter{ + {Name: "label0", Value: "value0"}, + {Name: "label1", Value: "value1"}, + {Name: "label2", Value: "value2"}, + {Name: "label3", Value: "value3"}, + {Name: "label4", Value: "value4"}, + {Name: "label5", Value: "value5"}, + {Name: "label6", Value: "value6"}, + {Name: "label7", Value: "value7"}, + {Name: "label8", Value: "value8"}, + {Name: "label9", Value: strconv.Itoa(i)}, + }) + } + + return series +} diff --git a/pkg/mimirpb/query_response_extra_test.go b/pkg/mimirpb/query_response_extra_test.go index 7b112537146..a09018c3bcd 100644 --- a/pkg/mimirpb/query_response_extra_test.go +++ b/pkg/mimirpb/query_response_extra_test.go @@ -77,7 +77,7 @@ func extractPrometheusStrings(t *testing.T, constantType string) []string { // FloatHistogram types converted into each other with unsafe.Pointer // are compatible func TestFloatHistogramProtobufTypeRemainsInSyncWithPrometheus(t *testing.T) { - test.RequireSameShape(t, histogram.FloatHistogram{}, FloatHistogram{}, false) + test.RequireSameShape(t, histogram.FloatHistogram{}, FloatHistogram{}, false, false) } // This example is from an investigation into a bug in the ruler. Keeping it here for future reference. diff --git a/pkg/mimirtool/config/inspect.go b/pkg/mimirtool/config/inspect.go index f49377a93b8..7d45d0a55f9 100644 --- a/pkg/mimirtool/config/inspect.go +++ b/pkg/mimirtool/config/inspect.go @@ -29,7 +29,7 @@ var ( type InspectedEntryFactory func() *InspectedEntry // InspectedEntry is the structure that holds a configuration block or a single configuration parameters. -// Blocks contain other other InspectedEntries. +// Blocks contain other InspectedEntries. type EntryKind string diff --git a/pkg/querier/partitioner.go b/pkg/querier/partitioner.go index 79859630fbc..24c3ecd4bb9 100644 --- a/pkg/querier/partitioner.go +++ b/pkg/querier/partitioner.go @@ -11,7 +11,6 @@ import ( "github.com/prometheus/prometheus/storage" "github.com/prometheus/prometheus/tsdb/chunkenc" - "github.com/grafana/mimir/pkg/ingester/client" "github.com/grafana/mimir/pkg/querier/batch" "github.com/grafana/mimir/pkg/storage/chunk" seriesset "github.com/grafana/mimir/pkg/storage/series" @@ -20,9 +19,10 @@ import ( // Series in the returned set are sorted alphabetically by labels. 
func partitionChunks(chunks []chunk.Chunk, mint, maxt int64) storage.SeriesSet { chunksBySeries := map[string][]chunk.Chunk{} + var buf [1024]byte for _, c := range chunks { - key := client.LabelsToKeyString(c.Metric) - chunksBySeries[key] = append(chunksBySeries[key], c) + key := c.Metric.Bytes(buf[:0]) + chunksBySeries[string(key)] = append(chunksBySeries[string(key)], c) } series := make([]storage.Series, 0, len(chunksBySeries)) diff --git a/pkg/querier/remote_read.go b/pkg/querier/remote_read.go index fbb1a4d9d32..b65aa9ccaf3 100644 --- a/pkg/querier/remote_read.go +++ b/pkg/querier/remote_read.go @@ -15,14 +15,15 @@ import ( "github.com/go-kit/log/level" "github.com/gogo/protobuf/proto" "github.com/pkg/errors" + "github.com/prometheus/common/model" + "github.com/prometheus/prometheus/model/labels" + "github.com/prometheus/prometheus/prompb" "github.com/prometheus/prometheus/promql" "github.com/prometheus/prometheus/storage" prom_remote "github.com/prometheus/prometheus/storage/remote" "github.com/prometheus/prometheus/tsdb/chunkenc" "github.com/prometheus/prometheus/tsdb/chunks" - "github.com/grafana/mimir/pkg/ingester/client" - "github.com/grafana/mimir/pkg/mimirpb" "github.com/grafana/mimir/pkg/querier/api" "github.com/grafana/mimir/pkg/util" util_log "github.com/grafana/mimir/pkg/util/log" @@ -49,7 +50,7 @@ func RemoteReadHandler(q storage.SampleAndChunkQueryable, logger log.Logger) htt func remoteReadHandler(q storage.SampleAndChunkQueryable, maxBytesInFrame int, lg log.Logger) http.Handler { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { ctx := r.Context() - var req client.ReadRequest + var req prompb.ReadRequest logger := util_log.WithContext(r.Context(), lg) if _, err := util.ParseProtoReader(ctx, r.Body, int(r.ContentLength), MaxRemoteReadQuerySize, nil, &req, util.RawSnappy); err != nil { level.Error(logger).Log("msg", "failed to parse proto", "err", err.Error()) @@ -64,7 +65,7 @@ func remoteReadHandler(q storage.SampleAndChunkQueryable, maxBytesInFrame int, l } switch respType { - case client.STREAMED_XOR_CHUNKS: + case prompb.ReadRequest_STREAMED_XOR_CHUNKS: remoteReadStreamedXORChunks(ctx, q, w, &req, maxBytesInFrame, logger) default: remoteReadSamples(ctx, q, w, &req, logger) @@ -76,18 +77,18 @@ func remoteReadSamples( ctx context.Context, q storage.Queryable, w http.ResponseWriter, - req *client.ReadRequest, + req *prompb.ReadRequest, logger log.Logger, ) { - resp := client.ReadResponse{ - Results: make([]*client.QueryResponse, len(req.Queries)), + resp := prompb.ReadResponse{ + Results: make([]*prompb.QueryResult, len(req.Queries)), } // Fetch samples for all queries in parallel. errCh := make(chan error) for i, qr := range req.Queries { - go func(i int, qr *client.QueryRequest) { - from, to, matchers, err := client.FromQueryRequest(qr) + go func(i int, qr *prompb.Query) { + from, to, matchers, err := queryFromRemoteReadQuery(qr) if err != nil { errCh <- err return @@ -104,7 +105,7 @@ func remoteReadSamples( End: int64(to), } seriesSet := querier.Select(ctx, false, params, matchers...) 
- resp.Results[i], err = seriesSetToQueryResponse(seriesSet) + resp.Results[i], err = seriesSetToQueryResult(seriesSet) errCh <- err }(i, qr) } @@ -136,7 +137,7 @@ func remoteReadStreamedXORChunks( ctx context.Context, q storage.ChunkQueryable, w http.ResponseWriter, - req *client.ReadRequest, + req *prompb.ReadRequest, maxBytesInFrame int, logger log.Logger, ) { @@ -178,13 +179,13 @@ func remoteReadErrorStatusCode(err error) int { func processReadStreamedQueryRequest( ctx context.Context, idx int, - queryReq *client.QueryRequest, + queryReq *prompb.Query, q storage.ChunkQueryable, w http.ResponseWriter, f http.Flusher, maxBytesInFrame int, ) error { - from, to, matchers, err := client.FromQueryRequest(queryReq) + from, to, matchers, err := queryFromRemoteReadQuery(queryReq) if err != nil { return err } @@ -208,29 +209,29 @@ func processReadStreamedQueryRequest( ) } -func seriesSetToQueryResponse(s storage.SeriesSet) (*client.QueryResponse, error) { - result := &client.QueryResponse{} +func seriesSetToQueryResult(s storage.SeriesSet) (*prompb.QueryResult, error) { + result := &prompb.QueryResult{} var it chunkenc.Iterator for s.Next() { series := s.At() - samples := []mimirpb.Sample{} - histograms := []mimirpb.Histogram{} + samples := []prompb.Sample{} + histograms := []prompb.Histogram{} it = series.Iterator(it) for valType := it.Next(); valType != chunkenc.ValNone; valType = it.Next() { switch valType { case chunkenc.ValFloat: t, v := it.At() - samples = append(samples, mimirpb.Sample{ - TimestampMs: t, - Value: v, + samples = append(samples, prompb.Sample{ + Timestamp: t, + Value: v, }) case chunkenc.ValHistogram: t, h := it.AtHistogram(nil) // Nil argument as we pass the data to the protobuf as-is without copy. - histograms = append(histograms, mimirpb.FromHistogramToHistogramProto(t, h)) + histograms = append(histograms, prom_remote.HistogramToHistogramProto(t, h)) case chunkenc.ValFloatHistogram: t, h := it.AtFloatHistogram(nil) // Nil argument as we pass the data to the protobuf as-is without copy. 
- histograms = append(histograms, mimirpb.FromFloatHistogramToHistogramProto(t, h)) + histograms = append(histograms, prom_remote.FloatHistogramToHistogramProto(t, h)) default: return nil, fmt.Errorf("unsupported value type: %v", valType) } @@ -240,8 +241,8 @@ func seriesSetToQueryResponse(s storage.SeriesSet) (*client.QueryResponse, error return nil, err } - ts := mimirpb.TimeSeries{ - Labels: mimirpb.FromLabelsToLabelAdapters(series.Labels()), + ts := &prompb.TimeSeries{ + Labels: prom_remote.LabelsToLabelsProto(series.Labels(), nil), Samples: samples, Histograms: histograms, } @@ -252,14 +253,14 @@ func seriesSetToQueryResponse(s storage.SeriesSet) (*client.QueryResponse, error return result, s.Err() } -func negotiateResponseType(accepted []client.ReadRequest_ResponseType) (client.ReadRequest_ResponseType, error) { +func negotiateResponseType(accepted []prompb.ReadRequest_ResponseType) (prompb.ReadRequest_ResponseType, error) { if len(accepted) == 0 { - return client.SAMPLES, nil + return prompb.ReadRequest_SAMPLES, nil } - supported := map[client.ReadRequest_ResponseType]struct{}{ - client.SAMPLES: {}, - client.STREAMED_XOR_CHUNKS: {}, + supported := map[prompb.ReadRequest_ResponseType]struct{}{ + prompb.ReadRequest_SAMPLES: {}, + prompb.ReadRequest_STREAMED_XOR_CHUNKS: {}, } for _, resType := range accepted { @@ -272,15 +273,15 @@ func negotiateResponseType(accepted []client.ReadRequest_ResponseType) (client.R func streamChunkedReadResponses(stream io.Writer, ss storage.ChunkSeriesSet, queryIndex, maxBytesInFrame int) error { var ( - chks []client.StreamChunk - lbls []mimirpb.LabelAdapter + chks []prompb.Chunk + lbls []prompb.Label ) var iter chunks.Iterator for ss.Next() { series := ss.At() iter = series.Iterator(iter) - lbls = mimirpb.FromLabelsToLabelAdapters(series.Labels()) + lbls = prom_remote.LabelsToLabelsProto(series.Labels(), nil) frameBytesRemaining := initializedFrameBytesRemaining(maxBytesInFrame, lbls) isNext := iter.Next() @@ -293,10 +294,10 @@ func streamChunkedReadResponses(stream io.Writer, ss storage.ChunkSeriesSet, que } // Cut the chunk. - chks = append(chks, client.StreamChunk{ + chks = append(chks, prompb.Chunk{ MinTimeMs: chk.MinTime, MaxTimeMs: chk.MaxTime, - Type: client.StreamChunk_Encoding(chk.Chunk.Encoding()), + Type: prompb.Chunk_Encoding(chk.Chunk.Encoding()), Data: chk.Chunk.Bytes(), }) frameBytesRemaining -= chks[len(chks)-1].Size() @@ -307,8 +308,8 @@ func streamChunkedReadResponses(stream io.Writer, ss storage.ChunkSeriesSet, que continue } - b, err := proto.Marshal(&client.StreamReadResponse{ - ChunkedSeries: []*client.StreamChunkedSeries{ + b, err := proto.Marshal(&prompb.ChunkedReadResponse{ + ChunkedSeries: []*prompb.ChunkedSeries{ { Labels: lbls, Chunks: chks, @@ -333,10 +334,22 @@ func streamChunkedReadResponses(stream io.Writer, ss storage.ChunkSeriesSet, que return ss.Err() } -func initializedFrameBytesRemaining(maxBytesInFrame int, lbls []mimirpb.LabelAdapter) int { +func initializedFrameBytesRemaining(maxBytesInFrame int, lbls []prompb.Label) int { frameBytesLeft := maxBytesInFrame for _, lbl := range lbls { frameBytesLeft -= lbl.Size() } return frameBytesLeft } + +// queryFromRemoteReadQuery returns the queried time range and label matchers for the given remote +// read request query. 
+func queryFromRemoteReadQuery(query *prompb.Query) (from, to model.Time, matchers []*labels.Matcher, err error) { + matchers, err = prom_remote.FromLabelMatchers(query.Matchers) + if err != nil { + return + } + from = model.Time(query.StartTimestampMs) + to = model.Time(query.EndTimestampMs) + return +} diff --git a/pkg/querier/remote_read_test.go b/pkg/querier/remote_read_test.go index 1f071f61618..1d0b806d5d1 100644 --- a/pkg/querier/remote_read_test.go +++ b/pkg/querier/remote_read_test.go @@ -20,6 +20,7 @@ import ( "github.com/pkg/errors" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/model/labels" + "github.com/prometheus/prometheus/prompb" "github.com/prometheus/prometheus/promql" "github.com/prometheus/prometheus/storage" prom_remote "github.com/prometheus/prometheus/storage/remote" @@ -27,7 +28,6 @@ import ( "github.com/prometheus/prometheus/util/annotations" "github.com/stretchr/testify/require" - "github.com/grafana/mimir/pkg/ingester/client" "github.com/grafana/mimir/pkg/mimirpb" "github.com/grafana/mimir/pkg/querier/api" "github.com/grafana/mimir/pkg/storage/series" @@ -49,26 +49,30 @@ func (m mockSampleAndChunkQueryable) ChunkQuerier(mint, maxt int64) (storage.Chu type mockQuerier struct { storage.Querier - seriesSet storage.SeriesSet + + selectFn func(ctx context.Context, sorted bool, hints *storage.SelectHints, matchers ...*labels.Matcher) storage.SeriesSet } -func (m mockQuerier) Select(_ context.Context, _ bool, sp *storage.SelectHints, _ ...*labels.Matcher) storage.SeriesSet { - if sp == nil { - panic("mockQuerier: select params must be set") +func (m mockQuerier) Select(ctx context.Context, sorted bool, hints *storage.SelectHints, matchers ...*labels.Matcher) storage.SeriesSet { + if m.selectFn != nil { + return m.selectFn(ctx, sorted, hints, matchers...) } - return m.seriesSet + + return storage.ErrSeriesSet(errors.New("the Select() function has not been mocked in the test")) } type mockChunkQuerier struct { storage.ChunkQuerier - seriesSet storage.SeriesSet + + selectFn func(ctx context.Context, sorted bool, hints *storage.SelectHints, matchers ...*labels.Matcher) storage.ChunkSeriesSet } -func (m mockChunkQuerier) Select(_ context.Context, _ bool, sp *storage.SelectHints, _ ...*labels.Matcher) storage.ChunkSeriesSet { - if sp == nil { - panic("mockChunkQuerier: select params must be set") +func (m mockChunkQuerier) Select(ctx context.Context, sorted bool, hints *storage.SelectHints, matchers ...*labels.Matcher) storage.ChunkSeriesSet { + if m.selectFn != nil { + return m.selectFn(ctx, sorted, hints, matchers...) 
} - return storage.NewSeriesSetToChunkSet(m.seriesSet) + + return storage.ErrChunkSeriesSet(errors.New("the Select() function has not been mocked in the test")) } type partiallyFailingSeriesSet struct { @@ -100,89 +104,126 @@ func (p *partiallyFailingSeriesSet) Warnings() annotations.Annotations { return p.ss.Warnings() } -func TestSampledRemoteRead(t *testing.T) { - q := &mockSampleAndChunkQueryable{ - queryableFn: func(int64, int64) (storage.Querier, error) { - return mockQuerier{ - seriesSet: series.NewConcreteSeriesSetFromUnsortedSeries([]storage.Series{ - series.NewConcreteSeries( - labels.FromStrings("foo", "bar"), - []model.SamplePair{{Timestamp: 0, Value: 0}, {Timestamp: 1, Value: 1}, {Timestamp: 2, Value: 2}, {Timestamp: 3, Value: 3}}, - []mimirpb.Histogram{mimirpb.FromHistogramToHistogramProto(4, test.GenerateTestHistogram(4))}, - ), - }), - }, nil +func TestRemoteReadHandler_Samples(t *testing.T) { + queries := map[string]struct { + query *prompb.Query + expectedQueriedStart int64 + expectedQueriedEnd int64 + }{ + "query without hints": { + query: &prompb.Query{ + StartTimestampMs: 1, + EndTimestampMs: 10, + }, + expectedQueriedStart: 1, + expectedQueriedEnd: 10, + }, + "query with hints": { + query: &prompb.Query{ + StartTimestampMs: 1, + EndTimestampMs: 10, + Hints: &prompb.ReadHints{ + StartMs: 2, + EndMs: 9, + }, + }, + expectedQueriedStart: 1, // Hints are currently ignored. + expectedQueriedEnd: 10, // Hints are currently ignored. }, } - handler := RemoteReadHandler(q, log.NewNopLogger()) - requestBody, err := proto.Marshal(&client.ReadRequest{ - Queries: []*client.QueryRequest{ - {StartTimestampMs: 0, EndTimestampMs: 10}, - }, - }) - require.NoError(t, err) - requestBody = snappy.Encode(nil, requestBody) - request, err := http.NewRequest(http.MethodPost, "/api/v1/read", bytes.NewReader(requestBody)) - require.NoError(t, err) - request.Header.Set("X-Prometheus-Remote-Read-Version", "0.1.0") - - recorder := httptest.NewRecorder() - handler.ServeHTTP(recorder, request) - - require.Equal(t, 200, recorder.Result().StatusCode) - require.Equal(t, []string([]string{"application/x-protobuf"}), recorder.Result().Header["Content-Type"]) - responseBody, err := io.ReadAll(recorder.Result().Body) - require.NoError(t, err) - responseBody, err = snappy.Decode(nil, responseBody) - require.NoError(t, err) - var response client.ReadResponse - err = proto.Unmarshal(responseBody, &response) - require.NoError(t, err) - - expected := client.ReadResponse{ - Results: []*client.QueryResponse{ - { - Timeseries: []mimirpb.TimeSeries{ - { - Labels: []mimirpb.LabelAdapter{ - {Name: "foo", Value: "bar"}, - }, - Samples: []mimirpb.Sample{ - {Value: 0, TimestampMs: 0}, - {Value: 1, TimestampMs: 1}, - {Value: 2, TimestampMs: 2}, - {Value: 3, TimestampMs: 3}, + for queryType, queryData := range queries { + t.Run(queryType, func(t *testing.T) { + var actualQueriedStart, actualQueriedEnd int64 + + q := &mockSampleAndChunkQueryable{ + queryableFn: func(_, _ int64) (storage.Querier, error) { + return mockQuerier{ + selectFn: func(_ context.Context, _ bool, hints *storage.SelectHints, _ ...*labels.Matcher) storage.SeriesSet { + require.NotNil(t, hints, "select hints must be set") + actualQueriedStart, actualQueriedEnd = hints.Start, hints.End + + return series.NewConcreteSeriesSetFromUnsortedSeries([]storage.Series{ + series.NewConcreteSeries( + labels.FromStrings("foo", "bar"), + []model.SamplePair{{Timestamp: 1, Value: 1}, {Timestamp: 2, Value: 2}, {Timestamp: 3, Value: 3}}, + 
[]mimirpb.Histogram{mimirpb.FromHistogramToHistogramProto(4, test.GenerateTestHistogram(4))}, + ), + }) }, - Histograms: []mimirpb.Histogram{ - mimirpb.FromHistogramToHistogramProto(4, test.GenerateTestHistogram(4)), + }, nil + }, + } + handler := RemoteReadHandler(q, log.NewNopLogger()) + + requestBody, err := proto.Marshal(&prompb.ReadRequest{Queries: []*prompb.Query{queryData.query}}) + require.NoError(t, err) + requestBody = snappy.Encode(nil, requestBody) + request, err := http.NewRequest(http.MethodPost, "/api/v1/read", bytes.NewReader(requestBody)) + require.NoError(t, err) + request.Header.Set("X-Prometheus-Remote-Read-Version", "0.1.0") + + recorder := httptest.NewRecorder() + handler.ServeHTTP(recorder, request) + + require.Equal(t, 200, recorder.Result().StatusCode) + require.Equal(t, []string{"application/x-protobuf"}, recorder.Result().Header["Content-Type"]) + responseBody, err := io.ReadAll(recorder.Result().Body) + require.NoError(t, err) + responseBody, err = snappy.Decode(nil, responseBody) + require.NoError(t, err) + var response prompb.ReadResponse + err = proto.Unmarshal(responseBody, &response) + require.NoError(t, err) + + expected := prompb.ReadResponse{ + Results: []*prompb.QueryResult{ + { + Timeseries: []*prompb.TimeSeries{ + { + Labels: []prompb.Label{ + {Name: "foo", Value: "bar"}, + }, + Samples: []prompb.Sample{ + {Value: 1, Timestamp: 1}, + {Value: 2, Timestamp: 2}, + {Value: 3, Timestamp: 3}, + }, + Histograms: []prompb.Histogram{ + prom_remote.HistogramToHistogramProto(4, test.GenerateTestHistogram(4)), + }, + }, }, }, }, - }, - }, + } + require.Equal(t, expected, response) + + // Ensure the time range passed down to the queryable is the expected one. + require.Equal(t, queryData.expectedQueriedStart, actualQueriedStart) + require.Equal(t, queryData.expectedQueriedEnd, actualQueriedEnd) + }) } - require.Equal(t, expected, response) } -func TestStreamedRemoteRead(t *testing.T) { - tcs := map[string]struct { +func TestRemoteReadHandler_StreamedXORChunks(t *testing.T) { + tests := map[string]struct { samples []model.SamplePair histograms []mimirpb.Histogram - expectedResults []*client.StreamReadResponse + expectedResults []*prompb.ChunkedReadResponse }{ "with 120 samples, we expect 1 frame with 1 chunk": { samples: getNSamples(120), - expectedResults: []*client.StreamReadResponse{ + expectedResults: []*prompb.ChunkedReadResponse{ { - ChunkedSeries: []*client.StreamChunkedSeries{ + ChunkedSeries: []*prompb.ChunkedSeries{ { - Labels: []mimirpb.LabelAdapter{{Name: "foo", Value: "bar"}}, - Chunks: []client.StreamChunk{ + Labels: []prompb.Label{{Name: "foo", Value: "bar"}}, + Chunks: []prompb.Chunk{ { MinTimeMs: 0, MaxTimeMs: 119, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: getIndexedChunk(0, 120, chunkenc.EncXOR), }, }, @@ -194,22 +235,22 @@ func TestStreamedRemoteRead(t *testing.T) { }, "with 121 samples, we expect 1 frame with 2 chunks": { samples: getNSamples(121), - expectedResults: []*client.StreamReadResponse{ + expectedResults: []*prompb.ChunkedReadResponse{ { - ChunkedSeries: []*client.StreamChunkedSeries{ + ChunkedSeries: []*prompb.ChunkedSeries{ { - Labels: []mimirpb.LabelAdapter{{Name: "foo", Value: "bar"}}, - Chunks: []client.StreamChunk{ + Labels: []prompb.Label{{Name: "foo", Value: "bar"}}, + Chunks: []prompb.Chunk{ { MinTimeMs: 0, MaxTimeMs: 119, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: getIndexedChunk(0, 121, chunkenc.EncXOR), }, { MinTimeMs: 120, MaxTimeMs: 120, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: 
getIndexedChunk(1, 121, chunkenc.EncXOR), }, }, @@ -221,22 +262,22 @@ func TestStreamedRemoteRead(t *testing.T) { }, "with 481 samples, we expect 2 frames with 2 chunks, and 1 frame with 1 chunk due to frame limit": { samples: getNSamples(481), - expectedResults: []*client.StreamReadResponse{ + expectedResults: []*prompb.ChunkedReadResponse{ { - ChunkedSeries: []*client.StreamChunkedSeries{ + ChunkedSeries: []*prompb.ChunkedSeries{ { - Labels: []mimirpb.LabelAdapter{{Name: "foo", Value: "bar"}}, - Chunks: []client.StreamChunk{ + Labels: []prompb.Label{{Name: "foo", Value: "bar"}}, + Chunks: []prompb.Chunk{ { MinTimeMs: 0, MaxTimeMs: 119, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: getIndexedChunk(0, 481, chunkenc.EncXOR), }, { MinTimeMs: 120, MaxTimeMs: 239, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: getIndexedChunk(1, 481, chunkenc.EncXOR), }, }, @@ -244,20 +285,20 @@ func TestStreamedRemoteRead(t *testing.T) { }, }, { - ChunkedSeries: []*client.StreamChunkedSeries{ + ChunkedSeries: []*prompb.ChunkedSeries{ { - Labels: []mimirpb.LabelAdapter{{Name: "foo", Value: "bar"}}, - Chunks: []client.StreamChunk{ + Labels: []prompb.Label{{Name: "foo", Value: "bar"}}, + Chunks: []prompb.Chunk{ { MinTimeMs: 240, MaxTimeMs: 359, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: getIndexedChunk(2, 481, chunkenc.EncXOR), }, { MinTimeMs: 360, MaxTimeMs: 479, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: getIndexedChunk(3, 481, chunkenc.EncXOR), }, }, @@ -265,14 +306,14 @@ func TestStreamedRemoteRead(t *testing.T) { }, }, { - ChunkedSeries: []*client.StreamChunkedSeries{ + ChunkedSeries: []*prompb.ChunkedSeries{ { - Labels: []mimirpb.LabelAdapter{{Name: "foo", Value: "bar"}}, - Chunks: []client.StreamChunk{ + Labels: []prompb.Label{{Name: "foo", Value: "bar"}}, + Chunks: []prompb.Chunk{ { MinTimeMs: 480, MaxTimeMs: 480, - Type: client.XOR, + Type: prompb.Chunk_XOR, Data: getIndexedChunk(4, 481, chunkenc.EncXOR), }, }, @@ -283,16 +324,16 @@ func TestStreamedRemoteRead(t *testing.T) { }, "120 native histograms": { histograms: getNHistogramSamples(120), - expectedResults: []*client.StreamReadResponse{ + expectedResults: []*prompb.ChunkedReadResponse{ { - ChunkedSeries: []*client.StreamChunkedSeries{ + ChunkedSeries: []*prompb.ChunkedSeries{ { - Labels: []mimirpb.LabelAdapter{{Name: "foo", Value: "bar"}}, - Chunks: []client.StreamChunk{ + Labels: []prompb.Label{{Name: "foo", Value: "bar"}}, + Chunks: []prompb.Chunk{ { MinTimeMs: 0, MaxTimeMs: 119, - Type: client.HISTOGRAM, + Type: prompb.Chunk_HISTOGRAM, Data: getIndexedChunk(0, 120, chunkenc.EncHistogram), }, }, @@ -304,16 +345,16 @@ func TestStreamedRemoteRead(t *testing.T) { }, "120 native float histograms": { histograms: getNFloatHistogramSamples(120), - expectedResults: []*client.StreamReadResponse{ + expectedResults: []*prompb.ChunkedReadResponse{ { - ChunkedSeries: []*client.StreamChunkedSeries{ + ChunkedSeries: []*prompb.ChunkedSeries{ { - Labels: []mimirpb.LabelAdapter{{Name: "foo", Value: "bar"}}, - Chunks: []client.StreamChunk{ + Labels: []prompb.Label{{Name: "foo", Value: "bar"}}, + Chunks: []prompb.Chunk{ { MinTimeMs: 0, MaxTimeMs: 119, - Type: client.FLOAT_HISTOGRAM, + Type: prompb.Chunk_FLOAT_HISTOGRAM, Data: getIndexedChunk(0, 120, chunkenc.EncFloatHistogram), }, }, @@ -323,63 +364,106 @@ func TestStreamedRemoteRead(t *testing.T) { }, }, } - for tn, tc := range tcs { - t.Run(tn, func(t *testing.T) { - q := &mockSampleAndChunkQueryable{ - chunkQueryableFn: func(int64, int64) (storage.ChunkQuerier, error) { - return 
mockChunkQuerier{ - seriesSet: series.NewConcreteSeriesSetFromUnsortedSeries([]storage.Series{ - series.NewConcreteSeries( - labels.FromStrings("foo", "bar"), - tc.samples, - tc.histograms, - ), - }), - }, nil - }, - } - // The labelset for this test has 10 bytes and a full chunk is roughly 165 bytes; for this test we want a - // frame to contain at most 2 chunks. - maxBytesInFrame := 10 + 165*2 - - handler := remoteReadHandler(q, maxBytesInFrame, log.NewNopLogger()) - requestBody, err := proto.Marshal(&client.ReadRequest{ - Queries: []*client.QueryRequest{ - {StartTimestampMs: 0, EndTimestampMs: 10}, + queries := map[string]struct { + query *prompb.Query + expectedQueriedStart int64 + expectedQueriedEnd int64 + }{ + "query without hints": { + query: &prompb.Query{ + StartTimestampMs: 1, + EndTimestampMs: 10, + }, + expectedQueriedStart: 1, + expectedQueriedEnd: 10, + }, + "query with hints": { + query: &prompb.Query{ + StartTimestampMs: 1, + EndTimestampMs: 10, + Hints: &prompb.ReadHints{ + StartMs: 2, + EndMs: 9, }, - AcceptedResponseTypes: []client.ReadRequest_ResponseType{client.STREAMED_XOR_CHUNKS}, - }) - require.NoError(t, err) - requestBody = snappy.Encode(nil, requestBody) - request, err := http.NewRequest(http.MethodPost, "/api/v1/read", bytes.NewReader(requestBody)) - require.NoError(t, err) - request.Header.Set("X-Prometheus-Remote-Read-Version", "0.1.0") - - recorder := httptest.NewRecorder() - handler.ServeHTTP(recorder, request) - - require.Equal(t, 200, recorder.Result().StatusCode) - require.Equal(t, []string{api.ContentTypeRemoteReadStreamedChunks}, recorder.Result().Header["Content-Type"]) - - stream := prom_remote.NewChunkedReader(recorder.Result().Body, prom_remote.DefaultChunkedReadLimit, nil) - - i := 0 - for { - var res client.StreamReadResponse - err := stream.NextProto(&res) - if errors.Is(err, io.EOF) { - break - } - require.NoError(t, err) + }, + expectedQueriedStart: 1, // Hints are currently ignored. + expectedQueriedEnd: 10, // Hints are currently ignored. + }, + } - if len(tc.expectedResults) < i+1 { - require.Fail(t, "unexpected result message") - } - require.Equal(t, tc.expectedResults[i], &res) - i++ + for testName, testData := range tests { + t.Run(testName, func(t *testing.T) { + for queryType, queryData := range queries { + t.Run(queryType, func(t *testing.T) { + var actualQueriedStart, actualQueriedEnd int64 + + q := &mockSampleAndChunkQueryable{ + chunkQueryableFn: func(int64, int64) (storage.ChunkQuerier, error) { + return mockChunkQuerier{ + selectFn: func(_ context.Context, _ bool, hints *storage.SelectHints, _ ...*labels.Matcher) storage.ChunkSeriesSet { + require.NotNil(t, hints, "select hints must be set") + actualQueriedStart, actualQueriedEnd = hints.Start, hints.End + + return storage.NewSeriesSetToChunkSet( + series.NewConcreteSeriesSetFromUnsortedSeries([]storage.Series{ + series.NewConcreteSeries( + labels.FromStrings("foo", "bar"), + testData.samples, + testData.histograms, + ), + }), + ) + }, + }, nil + }, + } + // The labelset for this test has 10 bytes and a full chunk is roughly 165 bytes; for this test we want a + // frame to contain at most 2 chunks. 
+ maxBytesInFrame := 10 + 165*2 + + handler := remoteReadHandler(q, maxBytesInFrame, log.NewNopLogger()) + + requestBody, err := proto.Marshal(&prompb.ReadRequest{ + Queries: []*prompb.Query{queryData.query}, + AcceptedResponseTypes: []prompb.ReadRequest_ResponseType{prompb.ReadRequest_STREAMED_XOR_CHUNKS}, + }) + require.NoError(t, err) + requestBody = snappy.Encode(nil, requestBody) + request, err := http.NewRequest(http.MethodPost, "/api/v1/read", bytes.NewReader(requestBody)) + require.NoError(t, err) + request.Header.Set("X-Prometheus-Remote-Read-Version", "0.1.0") + + recorder := httptest.NewRecorder() + handler.ServeHTTP(recorder, request) + + require.Equal(t, 200, recorder.Result().StatusCode) + require.Equal(t, []string{api.ContentTypeRemoteReadStreamedChunks}, recorder.Result().Header["Content-Type"]) + + stream := prom_remote.NewChunkedReader(recorder.Result().Body, prom_remote.DefaultChunkedReadLimit, nil) + + i := 0 + for { + var res prompb.ChunkedReadResponse + err := stream.NextProto(&res) + if errors.Is(err, io.EOF) { + break + } + require.NoError(t, err) + + if len(testData.expectedResults) < i+1 { + require.Fail(t, "unexpected result message") + } + require.Equal(t, testData.expectedResults[i], &res) + i++ + } + require.Len(t, testData.expectedResults, i) + + // Ensure the time range passed down to the queryable is the expected one. + require.Equal(t, queryData.expectedQueriedStart, actualQueriedStart) + require.Equal(t, queryData.expectedQueriedEnd, actualQueriedEnd) + }) } - require.Len(t, tc.expectedResults, i) }) } } @@ -521,17 +605,20 @@ func TestRemoteReadErrorParsing(t *testing.T) { q := &mockSampleAndChunkQueryable{ queryableFn: func(int64, int64) (storage.Querier, error) { return mockQuerier{ - seriesSet: tc.seriesSet, + selectFn: func(_ context.Context, _ bool, hints *storage.SelectHints, _ ...*labels.Matcher) storage.SeriesSet { + require.NotNil(t, hints, "select hints must be set") + return tc.seriesSet + }, }, tc.getQuerierErr }, } handler := remoteReadHandler(q, 1024*1024, log.NewNopLogger()) - requestBody, err := proto.Marshal(&client.ReadRequest{ - Queries: []*client.QueryRequest{ + requestBody, err := proto.Marshal(&prompb.ReadRequest{ + Queries: []*prompb.Query{ {StartTimestampMs: 0, EndTimestampMs: 10}, }, - AcceptedResponseTypes: []client.ReadRequest_ResponseType{client.SAMPLES}, + AcceptedResponseTypes: []prompb.ReadRequest_ResponseType{prompb.ReadRequest_SAMPLES}, }) require.NoError(t, err) requestBody = snappy.Encode(nil, requestBody) @@ -557,17 +644,19 @@ func TestRemoteReadErrorParsing(t *testing.T) { q := &mockSampleAndChunkQueryable{ chunkQueryableFn: func(int64, int64) (storage.ChunkQuerier, error) { return mockChunkQuerier{ - seriesSet: tc.seriesSet, + selectFn: func(_ context.Context, _ bool, _ *storage.SelectHints, _ ...*labels.Matcher) storage.ChunkSeriesSet { + return storage.NewSeriesSetToChunkSet(tc.seriesSet) + }, }, tc.getQuerierErr }, } handler := remoteReadHandler(q, 1024*1024, log.NewNopLogger()) - requestBody, err := proto.Marshal(&client.ReadRequest{ - Queries: []*client.QueryRequest{ + requestBody, err := proto.Marshal(&prompb.ReadRequest{ + Queries: []*prompb.Query{ {StartTimestampMs: 0, EndTimestampMs: 10}, }, - AcceptedResponseTypes: []client.ReadRequest_ResponseType{client.STREAMED_XOR_CHUNKS}, + AcceptedResponseTypes: []prompb.ReadRequest_ResponseType{prompb.ReadRequest_STREAMED_XOR_CHUNKS}, }) require.NoError(t, err) requestBody = snappy.Encode(nil, requestBody) @@ -587,3 +676,51 @@ func TestRemoteReadErrorParsing(t 
*testing.T) { } }) } + +func TestQueryFromRemoteReadQuery(t *testing.T) { + tests := map[string]struct { + query *prompb.Query + expectedFrom model.Time + expectedTo model.Time + expectedMatchers []*labels.Matcher + }{ + "remote read request query without hints": { + query: &prompb.Query{ + StartTimestampMs: 1000, + EndTimestampMs: 2000, + Matchers: []*prompb.LabelMatcher{ + {Type: prompb.LabelMatcher_EQ, Name: labels.MetricName, Value: "metric"}, + }, + }, + expectedFrom: 1000, + expectedTo: 2000, + expectedMatchers: []*labels.Matcher{{Type: labels.MatchEqual, Name: labels.MetricName, Value: "metric"}}, + }, + "remote read request query with hints": { + query: &prompb.Query{ + StartTimestampMs: 1000, + EndTimestampMs: 2000, + Matchers: []*prompb.LabelMatcher{ + {Type: prompb.LabelMatcher_EQ, Name: labels.MetricName, Value: "metric"}, + }, + Hints: &prompb.ReadHints{ + StartMs: 500, + EndMs: 1500, + }, + }, + expectedFrom: 1000, // Hints are currently ignored. + expectedTo: 2000, // Hints are currently ignored. + expectedMatchers: []*labels.Matcher{{Type: labels.MatchEqual, Name: labels.MetricName, Value: "metric"}}, + }, + } + + for testName, testData := range tests { + t.Run(testName, func(t *testing.T) { + actualFrom, actualTo, actualMatchers, err := queryFromRemoteReadQuery(testData.query) + require.NoError(t, err) + require.Equal(t, testData.expectedFrom, actualFrom) + require.Equal(t, testData.expectedTo, actualTo) + require.Equal(t, testData.expectedMatchers, actualMatchers) + }) + } +} diff --git a/pkg/ruler/api.go b/pkg/ruler/api.go index d78373129b9..a5c3e227639 100644 --- a/pkg/ruler/api.go +++ b/pkg/ruler/api.go @@ -524,14 +524,14 @@ func (a *API) CreateRuleGroup(w http.ResponseWriter, req *http.Request) { return } - if err := a.ruler.AssertMaxRulesPerRuleGroup(userID, len(rg.Rules)); err != nil { + if err := a.ruler.AssertMaxRulesPerRuleGroup(userID, namespace, len(rg.Rules)); err != nil { level.Error(logger).Log("msg", "limit validation failure", "err", err.Error(), "user", userID) http.Error(w, err.Error(), http.StatusBadRequest) return } - // Only list rule groups when enforcing a max number of groups for this tenant. - if a.ruler.IsMaxRuleGroupsLimited(userID) { + // Only list rule groups when enforcing a max number of groups for this tenant and namespace. 
+ if a.ruler.IsMaxRuleGroupsLimited(userID, namespace) { rgs, err := a.store.ListRuleGroupsForUserAndNamespace(ctx, userID, "") if err != nil { level.Error(logger).Log("msg", "unable to fetch current rule groups for validation", "err", err.Error(), "user", userID) @@ -539,7 +539,7 @@ func (a *API) CreateRuleGroup(w http.ResponseWriter, req *http.Request) { return } - if err := a.ruler.AssertMaxRuleGroups(userID, len(rgs)+1); err != nil { + if err := a.ruler.AssertMaxRuleGroups(userID, namespace, len(rgs)+1); err != nil { level.Error(logger).Log("msg", "limit validation failure", "err", err.Error(), "user", userID) http.Error(w, err.Error(), http.StatusBadRequest) return diff --git a/pkg/ruler/compat.go b/pkg/ruler/compat.go index f069a02f754..3ab872a8b5d 100644 --- a/pkg/ruler/compat.go +++ b/pkg/ruler/compat.go @@ -145,8 +145,8 @@ func (t *PusherAppendable) Appender(ctx context.Context) storage.Appender { type RulesLimits interface { EvaluationDelay(userID string) time.Duration RulerTenantShardSize(userID string) int - RulerMaxRuleGroupsPerTenant(userID string) int - RulerMaxRulesPerRuleGroup(userID string) int + RulerMaxRuleGroupsPerTenant(userID, namespace string) int + RulerMaxRulesPerRuleGroup(userID, namespace string) int RulerRecordingRulesEvaluationEnabled(userID string) bool RulerAlertingRulesEvaluationEnabled(userID string) bool RulerSyncRulesOnChangesEnabled(userID string) bool diff --git a/pkg/ruler/ruler.go b/pkg/ruler/ruler.go index cd3384c353b..bcb90e512c7 100644 --- a/pkg/ruler/ruler.go +++ b/pkg/ruler/ruler.go @@ -1157,15 +1157,15 @@ func (r *Ruler) getLocalRules(ctx context.Context, userID string, req RulesReque } // IsMaxRuleGroupsLimited returns true if there is a limit set for the max -// number of rule groups for the tenant. -func (r *Ruler) IsMaxRuleGroupsLimited(userID string) bool { - return r.limits.RulerMaxRuleGroupsPerTenant(userID) > 0 +// number of rule groups for the tenant and namespace. +func (r *Ruler) IsMaxRuleGroupsLimited(userID, namespace string) bool { + return r.limits.RulerMaxRuleGroupsPerTenant(userID, namespace) > 0 } // AssertMaxRuleGroups limit has not been reached compared to the current // number of total rule groups in input and returns an error if so. -func (r *Ruler) AssertMaxRuleGroups(userID string, rg int) error { - limit := r.limits.RulerMaxRuleGroupsPerTenant(userID) +func (r *Ruler) AssertMaxRuleGroups(userID, namespace string, rg int) error { + limit := r.limits.RulerMaxRuleGroupsPerTenant(userID, namespace) if limit <= 0 { return nil @@ -1179,9 +1179,10 @@ func (r *Ruler) AssertMaxRuleGroups(userID string, rg int) error { } // AssertMaxRulesPerRuleGroup limit has not been reached compared to the current -// number of rules in a rule group in input and returns an error if so. -func (r *Ruler) AssertMaxRulesPerRuleGroup(userID string, rules int) error { - limit := r.limits.RulerMaxRulesPerRuleGroup(userID) +// number of rules in a rule group and namespace combination in input, returns an error if so. +// If the limit is set to 0 (or less), then there is no limit. 
+func (r *Ruler) AssertMaxRulesPerRuleGroup(userID, namespace string, rules int) error { + limit := r.limits.RulerMaxRulesPerRuleGroup(userID, namespace) if limit <= 0 { return nil diff --git a/pkg/storage/bucket/s3/bucket_client.go b/pkg/storage/bucket/s3/bucket_client.go index 6ef1d5bce6d..06970f00be6 100644 --- a/pkg/storage/bucket/s3/bucket_client.go +++ b/pkg/storage/bucket/s3/bucket_client.go @@ -60,6 +60,7 @@ func newS3Config(cfg Config) (s3.Config, error) { PutUserMetadata: putUserMetadata, SendContentMd5: cfg.SendContentMd5, SSEConfig: sseCfg, + DisableDualstack: !cfg.DualstackEnabled, ListObjectsVersion: cfg.ListObjectsVersion, BucketLookupType: cfg.BucketLookupType, AWSSDKAuth: cfg.NativeAWSAuthEnabled, diff --git a/pkg/storage/bucket/s3/config.go b/pkg/storage/bucket/s3/config.go index 4f81115fee3..c7f79800973 100644 --- a/pkg/storage/bucket/s3/config.go +++ b/pkg/storage/bucket/s3/config.go @@ -122,6 +122,7 @@ type Config struct { SignatureVersion string `yaml:"signature_version" category:"advanced"` ListObjectsVersion string `yaml:"list_objects_version" category:"advanced"` BucketLookupType s3.BucketLookupType `yaml:"bucket_lookup_type" category:"advanced"` + DualstackEnabled bool `yaml:"dualstack_enabled" category:"experimental"` StorageClass string `yaml:"storage_class" category:"experimental"` NativeAWSAuthEnabled bool `yaml:"native_aws_auth_enabled" category:"experimental"` PartSize uint64 `yaml:"part_size" category:"experimental"` @@ -152,6 +153,7 @@ func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) { f.Uint64Var(&cfg.PartSize, prefix+"s3.part-size", 0, "The minimum file size in bytes used for multipart uploads. If 0, the value is optimally computed for each object.") f.BoolVar(&cfg.SendContentMd5, prefix+"s3.send-content-md5", false, "If enabled, a Content-MD5 header is sent with S3 Put Object requests. Consumes more resources to compute the MD5, but may improve compatibility with object storage services that do not support checksums.") f.Var(newBucketLookupTypeValue(s3.AutoLookup, &cfg.BucketLookupType), prefix+"s3.bucket-lookup-type", fmt.Sprintf("Bucket lookup style type, used to access bucket in S3-compatible service. Default is auto. 
Supported values are: %s.", strings.Join(supportedBucketLookupTypes, ", "))) + f.BoolVar(&cfg.DualstackEnabled, prefix+"s3.dualstack-enabled", true, "When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region.") f.StringVar(&cfg.STSEndpoint, prefix+"s3.sts-endpoint", "", "Accessing S3 resources using temporary, secure credentials provided by AWS Security Token Service.") cfg.SSE.RegisterFlagsWithPrefix(prefix+"s3.sse.", f) cfg.HTTP.RegisterFlagsWithPrefix(prefix, f) diff --git a/pkg/storegateway/bucket_test.go b/pkg/storegateway/bucket_test.go index 372161d040d..45ed0501f77 100644 --- a/pkg/storegateway/bucket_test.go +++ b/pkg/storegateway/bucket_test.go @@ -1071,8 +1071,14 @@ func uploadTestBlock(t testing.TB, tmpDir string, bkt objstore.Bucket, dataSetup func appendTestSeries(series int) func(testing.TB, func() storage.Appender) { return func(t testing.TB, appenderFactory func() storage.Appender) { app := appenderFactory() - addSeries := func(l labels.Labels) { - _, err := app.Append(0, l, 0, 0) + b := labels.NewScratchBuilder(4) + addSeries := func(ss ...string) { + b.Reset() + for i := 0; i < len(ss); i += 2 { + b.Add(ss[i], ss[i+1]) + } + b.Sort() + _, err := app.Append(0, b.Labels(), 0, 0) assert.NoError(t, err) } @@ -1080,12 +1086,12 @@ func appendTestSeries(series int) func(testing.TB, func() storage.Appender) { for n := 0; n < 10; n++ { for i := 0; i < series/10; i++ { - addSeries(labels.FromStrings("i", strconv.Itoa(i)+labelLongSuffix, "n", strconv.Itoa(n)+labelLongSuffix, "j", "foo", "p", "foo")) + addSeries("i", strconv.Itoa(i)+labelLongSuffix, "n", strconv.Itoa(n)+labelLongSuffix, "j", "foo", "p", "foo") // Have some series that won't be matched, to properly test inverted matches. - addSeries(labels.FromStrings("i", strconv.Itoa(i)+labelLongSuffix, "n", strconv.Itoa(n)+labelLongSuffix, "j", "bar", "q", "foo")) - addSeries(labels.FromStrings("i", strconv.Itoa(i)+labelLongSuffix, "n", "0_"+strconv.Itoa(n)+labelLongSuffix, "j", "bar", "r", "foo")) - addSeries(labels.FromStrings("i", strconv.Itoa(i)+labelLongSuffix, "n", "1_"+strconv.Itoa(n)+labelLongSuffix, "j", "bar", "s", "foo")) - addSeries(labels.FromStrings("i", strconv.Itoa(i)+labelLongSuffix, "n", "2_"+strconv.Itoa(n)+labelLongSuffix, "j", "foo", "t", "foo")) + addSeries("i", strconv.Itoa(i)+labelLongSuffix, "n", strconv.Itoa(n)+labelLongSuffix, "j", "bar", "q", "foo") + addSeries("i", strconv.Itoa(i)+labelLongSuffix, "n", "0_"+strconv.Itoa(n)+labelLongSuffix, "j", "bar", "r", "foo") + addSeries("i", strconv.Itoa(i)+labelLongSuffix, "n", "1_"+strconv.Itoa(n)+labelLongSuffix, "j", "bar", "s", "foo") + addSeries("i", strconv.Itoa(i)+labelLongSuffix, "n", "2_"+strconv.Itoa(n)+labelLongSuffix, "j", "foo", "t", "foo") } assert.NoError(t, app.Commit()) app = appenderFactory() diff --git a/pkg/storegateway/series_chunks_test.go b/pkg/storegateway/series_chunks_test.go index 5cdfb6cea43..292bb0942d0 100644 --- a/pkg/storegateway/series_chunks_test.go +++ b/pkg/storegateway/series_chunks_test.go @@ -62,6 +62,7 @@ func TestSeriesChunksSet(t *testing.T) { for r := 0; r < numRuns; r++ { set := newSeriesChunksSet(numSeries, true) + lset := labels.FromStrings(labels.MetricName, "metric") // Ensure the series slice is made of all zero values. Then write something inside before releasing it again. // The slice is expected to be picked from the pool, at least in some runs (there's an assertion on it at // the end of the test). 
@@ -69,7 +70,7 @@ func TestSeriesChunksSet(t *testing.T) { for i := 0; i < numSeries; i++ { require.Zero(t, set.series[i]) - set.series[i].lset = labels.FromStrings(labels.MetricName, "metric") + set.series[i].lset = lset set.series[i].chks = set.newSeriesAggrChunkSlice(numChunksPerSeries) } diff --git a/pkg/storegateway/series_refs.go b/pkg/storegateway/series_refs.go index 060263ea0d0..fdef254e3bf 100644 --- a/pkg/storegateway/series_refs.go +++ b/pkg/storegateway/series_refs.go @@ -683,6 +683,7 @@ type loadingSeriesChunkRefsSetIterator struct { shard *sharding.ShardSelector seriesHasher seriesHasher strategy seriesIteratorStrategy + builder labels.ScratchBuilder minTime, maxTime int64 tenantID string logger log.Logger @@ -842,6 +843,7 @@ func newLoadingSeriesChunkRefsSetIterator( shard: shard, seriesHasher: seriesHasher, strategy: strategy, + builder: labels.NewScratchBuilder(0), minTime: minTime, maxTime: maxTime, tenantID: tenantID, @@ -1118,12 +1120,8 @@ func (s *loadingSeriesChunkRefsSetIterator) loadSeries(ref storage.SeriesRef, lo func (s *loadingSeriesChunkRefsSetIterator) singlePassStringify(symbolizedSet symbolizedSeriesChunkRefsSet) (seriesChunkRefsSet, error) { // Some conservative map pre-allocation; the goal is to get an order of magnitude size of the map, so we minimize map growth. symbols := make(map[uint32]string, len(symbolizedSet.series)/2) - maxLabelsPerSeries := 0 for _, series := range symbolizedSet.series { - if numLabels := len(series.lset); maxLabelsPerSeries < numLabels { - maxLabelsPerSeries = numLabels - } for _, symRef := range series.lset { symbols[symRef.value] = "" symbols[symRef.name] = "" @@ -1151,15 +1149,14 @@ func (s *loadingSeriesChunkRefsSetIterator) singlePassStringify(symbolizedSet sy // This can be released by the caller because loadingSeriesChunkRefsSetIterator doesn't retain it after Next() is called again. set := newSeriesChunkRefsSet(len(symbolizedSet.series), true) - labelsBuilder := labels.NewScratchBuilder(maxLabelsPerSeries) for _, series := range symbolizedSet.series { - labelsBuilder.Reset() + s.builder.Reset() for _, symRef := range series.lset { - labelsBuilder.Add(symbols[symRef.name], symbols[symRef.value]) + s.builder.Add(symbols[symRef.name], symbols[symRef.value]) } set.series = append(set.series, seriesChunkRefs{ - lset: labelsBuilder.Labels(), + lset: s.builder.Labels(), refs: series.refs, }) } @@ -1171,9 +1168,8 @@ func (s *loadingSeriesChunkRefsSetIterator) multiLookupStringify(symbolizedSet s // This can be released by the caller because loadingSeriesChunkRefsSetIterator doesn't retain it after Next() is called again. 
set := newSeriesChunkRefsSet(len(symbolizedSet.series), true) - labelsBuilder := labels.NewScratchBuilder(16) for _, series := range symbolizedSet.series { - lset, err := s.indexr.LookupLabelsSymbols(s.ctx, series.lset, &labelsBuilder) + lset, err := s.indexr.LookupLabelsSymbols(s.ctx, series.lset, &s.builder) if err != nil { return seriesChunkRefsSet{}, err } diff --git a/pkg/storegateway/series_refs_test.go b/pkg/storegateway/series_refs_test.go index dad02bd2a2c..aeb21e6ebdf 100644 --- a/pkg/storegateway/series_refs_test.go +++ b/pkg/storegateway/series_refs_test.go @@ -1087,10 +1087,16 @@ func TestLimitingSeriesChunkRefsSetIterator(t *testing.T) { } func TestLoadingSeriesChunkRefsSetIterator(t *testing.T) { + b := labels.NewScratchBuilder(1) + oneLabel := func(name, value string) labels.Labels { + b.Reset() + b.Add(name, value) + return b.Labels() + } defaultTestBlockFactory := prepareTestBlock(test.NewTB(t), func(t testing.TB, appenderFactory func() storage.Appender) { appender := appenderFactory() for i := 0; i < 100; i++ { - _, err := appender.Append(0, labels.FromStrings("l1", fmt.Sprintf("v%d", i)), int64(i*10), 0) + _, err := appender.Append(0, oneLabel("l1", fmt.Sprintf("v%d", i)), int64(i*10), 0) assert.NoError(t, err) } assert.NoError(t, appender.Commit()) @@ -1100,7 +1106,7 @@ func TestLoadingSeriesChunkRefsSetIterator(t *testing.T) { largerTestBlockFactory := prepareTestBlock(test.NewTB(t), func(t testing.TB, appenderFactory func() storage.Appender) { for i := 0; i < largerTestBlockSeriesCount; i++ { appender := appenderFactory() - lbls := labels.FromStrings("l1", fmt.Sprintf("v%d", i)) + lbls := oneLabel("l1", fmt.Sprintf("v%d", i)) var ref storage.SeriesRef const numSamples = 240 // Write enough samples to have two chunks per series for j := 0; j < numSamples; j++ { @@ -1247,7 +1253,7 @@ func TestLoadingSeriesChunkRefsSetIterator(t *testing.T) { expectedSets: func() []seriesChunkRefsSet { set := newSeriesChunkRefsSet(largerTestBlockSeriesCount, true) for i := 0; i < largerTestBlockSeriesCount; i++ { - set.series = append(set.series, seriesChunkRefs{lset: labels.FromStrings("l1", fmt.Sprintf("v%d", i))}) + set.series = append(set.series, seriesChunkRefs{lset: oneLabel("l1", fmt.Sprintf("v%d", i))}) } // The order of series in the block is by their labels, so we need to sort what we generated. sort.Slice(set.series, func(i, j int) bool { @@ -1265,7 +1271,7 @@ func TestLoadingSeriesChunkRefsSetIterator(t *testing.T) { expectedSets: func() []seriesChunkRefsSet { series := make([]seriesChunkRefs, 0, largerTestBlockSeriesCount) for i := 0; i < largerTestBlockSeriesCount; i++ { - series = append(series, seriesChunkRefs{lset: labels.FromStrings("l1", fmt.Sprintf("v%d", i))}) + series = append(series, seriesChunkRefs{lset: oneLabel("l1", fmt.Sprintf("v%d", i))}) } // The order of series in the block is by their labels, so we need to sort what we generated. sort.Slice(series, func(i, j int) bool { @@ -2409,11 +2415,12 @@ func readAllSeriesChunkRefs(it iterator[seriesChunkRefs]) []seriesChunkRefs { // incremented by +1. 
func createSeriesChunkRefsSet(minSeriesID, maxSeriesID int, releasable bool) seriesChunkRefsSet { set := newSeriesChunkRefsSet(maxSeriesID-minSeriesID+1, releasable) + b := labels.NewScratchBuilder(1) for seriesID := minSeriesID; seriesID <= maxSeriesID; seriesID++ { - set.series = append(set.series, seriesChunkRefs{ - lset: labels.FromStrings(labels.MetricName, fmt.Sprintf("metric_%06d", seriesID)), - }) + b.Reset() + b.Add(labels.MetricName, fmt.Sprintf("metric_%06d", seriesID)) + set.series = append(set.series, seriesChunkRefs{lset: b.Labels()}) } return set diff --git a/pkg/streamingpromql/functions.go b/pkg/streamingpromql/functions.go index b95686c7212..883c35ca8e2 100644 --- a/pkg/streamingpromql/functions.go +++ b/pkg/streamingpromql/functions.go @@ -89,10 +89,32 @@ func createRateFunctionOperator(args []types.Operator, pool *pooling.LimitingPoo // These functions return an instant-vector. var instantVectorFunctionOperatorFactories = map[string]InstantVectorFunctionOperatorFactory{ + "abs": TransformationFunctionOperatorFactory("abs", functions.Abs), "acos": TransformationFunctionOperatorFactory("acos", functions.Acos), + "acosh": TransformationFunctionOperatorFactory("acosh", functions.Acosh), + "asin": TransformationFunctionOperatorFactory("asin", functions.Asin), + "asinh": TransformationFunctionOperatorFactory("asinh", functions.Asinh), + "atan": TransformationFunctionOperatorFactory("atan", functions.Atan), + "atanh": TransformationFunctionOperatorFactory("atanh", functions.Atanh), + "ceil": TransformationFunctionOperatorFactory("ceil", functions.Ceil), + "cos": TransformationFunctionOperatorFactory("cos", functions.Cos), + "cosh": TransformationFunctionOperatorFactory("cosh", functions.Cosh), + "deg": TransformationFunctionOperatorFactory("deg", functions.Deg), + "exp": TransformationFunctionOperatorFactory("exp", functions.Exp), + "floor": TransformationFunctionOperatorFactory("floor", functions.Floor), "histogram_count": TransformationFunctionOperatorFactory("histogram_count", functions.HistogramCount), "histogram_sum": TransformationFunctionOperatorFactory("histogram_sum", functions.HistogramSum), + "ln": TransformationFunctionOperatorFactory("ln", functions.Ln), + "log10": TransformationFunctionOperatorFactory("log10", functions.Log10), + "log2": TransformationFunctionOperatorFactory("log2", functions.Log2), + "rad": TransformationFunctionOperatorFactory("rad", functions.Rad), "rate": createRateFunctionOperator, + "sgn": TransformationFunctionOperatorFactory("sgn", functions.Sgn), + "sin": TransformationFunctionOperatorFactory("sin", functions.Sin), + "sinh": TransformationFunctionOperatorFactory("sinh", functions.Sinh), + "sqrt": TransformationFunctionOperatorFactory("sqrt", functions.Sqrt), + "tan": TransformationFunctionOperatorFactory("tan", functions.Tan), + "tanh": TransformationFunctionOperatorFactory("tanh", functions.Tanh), } func RegisterInstantVectorFunctionOperatorFactory(functionName string, factory InstantVectorFunctionOperatorFactory) error { diff --git a/pkg/streamingpromql/functions/math.go b/pkg/streamingpromql/functions/math.go index e119f3050eb..df86561325a 100644 --- a/pkg/streamingpromql/functions/math.go +++ b/pkg/streamingpromql/functions/math.go @@ -6,4 +6,45 @@ import ( "math" ) +var Abs = FloatTransformationDropHistogramsFunc(math.Abs) var Acos = FloatTransformationDropHistogramsFunc(math.Acos) +var Acosh = FloatTransformationDropHistogramsFunc(math.Acosh) +var Asin = FloatTransformationDropHistogramsFunc(math.Asin) +var Asinh = 
FloatTransformationDropHistogramsFunc(math.Asinh) +var Atan = FloatTransformationDropHistogramsFunc(math.Atan) +var Atanh = FloatTransformationDropHistogramsFunc(math.Atanh) +var Ceil = FloatTransformationDropHistogramsFunc(math.Ceil) +var Cos = FloatTransformationDropHistogramsFunc(math.Cos) +var Cosh = FloatTransformationDropHistogramsFunc(math.Cosh) +var Exp = FloatTransformationDropHistogramsFunc(math.Exp) +var Floor = FloatTransformationDropHistogramsFunc(math.Floor) +var Ln = FloatTransformationDropHistogramsFunc(math.Log) +var Log10 = FloatTransformationDropHistogramsFunc(math.Log10) +var Log2 = FloatTransformationDropHistogramsFunc(math.Log2) +var Sin = FloatTransformationDropHistogramsFunc(math.Sin) +var Sinh = FloatTransformationDropHistogramsFunc(math.Sinh) +var Sqrt = FloatTransformationDropHistogramsFunc(math.Sqrt) +var Tan = FloatTransformationDropHistogramsFunc(math.Tan) +var Tanh = FloatTransformationDropHistogramsFunc(math.Tanh) + +var Deg = FloatTransformationDropHistogramsFunc(func(f float64) float64 { + return f * 180 / math.Pi +}) + +var Rad = FloatTransformationDropHistogramsFunc(func(f float64) float64 { + return f * math.Pi / 180 +}) + +var Sgn = FloatTransformationDropHistogramsFunc(func(f float64) float64 { + if f < 0 { + return -1 + } + + if f > 0 { + return 1 + } + + // This behaviour is undocumented, but if f is +/- NaN, Prometheus' engine returns that value. + // Otherwise, if the value is 0, we should return 0. + return f +}) diff --git a/pkg/streamingpromql/testdata/ours/functions.test b/pkg/streamingpromql/testdata/ours/functions.test index 321ff9d10dd..a5d689ea4bf 100644 --- a/pkg/streamingpromql/testdata/ours/functions.test +++ b/pkg/streamingpromql/testdata/ours/functions.test @@ -42,9 +42,24 @@ eval range from 0 to 5m step 1m rate(some_metric_with_stale_marker[2m]) clear -# Test simple functions +# Test simple functions not covered by the upstream tests load 1m - some_metric{env="prod"} 0 0.5 NaN + some_metric{env="prod"} 0 0.5 -0.5 NaN -NaN 2.1 -2.1 -eval range from 0 to 2m step 1m acos(some_metric) - {env="prod"} 1.5707963267948966 1.0471975511965976 NaN +eval range from 0 to 4m step 1m abs(some_metric) + {env="prod"} 0 0.5 0.5 NaN NaN + +eval range from 0 to 4m step 1m acos(some_metric) + {env="prod"} 1.5707963267948966 1.0471975511965976 2.0943951023931957 NaN NaN + +eval range from 0 to 4m step 1m asin(some_metric) + {env="prod"} 0 0.5235987755982989 -0.5235987755982989 NaN NaN + +eval range from 0 to 4m step 1m atanh(some_metric) + {env="prod"} 0 0.5493061443340548 -0.5493061443340548 NaN NaN + +eval range from 0 to 6m step 1m ceil(some_metric) + {env="prod"} 0 1 -0 NaN -NaN 3 -2 + +eval range from 0 to 6m step 1m floor(some_metric) + {env="prod"} 0 0 -1 NaN -NaN 2 -3 diff --git a/pkg/streamingpromql/testdata/upstream/functions.test b/pkg/streamingpromql/testdata/upstream/functions.test index 66bb0d8006a..c25d981ca75 100644 --- a/pkg/streamingpromql/testdata/upstream/functions.test +++ b/pkg/streamingpromql/testdata/upstream/functions.test @@ -506,14 +506,13 @@ load 5m test_sgn{src="sgn-e"} 0 test_sgn{src="sgn-f"} 100 -# Unsupported by streaming engine. -# eval instant at 0m sgn(test_sgn) -# {src="sgn-a"} -1 -# {src="sgn-b"} 1 -# {src="sgn-c"} NaN -# {src="sgn-d"} -1 -# {src="sgn-e"} 0 -# {src="sgn-f"} 1 +eval instant at 0m sgn(test_sgn) + {src="sgn-a"} -1 + {src="sgn-b"} 1 + {src="sgn-c"} NaN + {src="sgn-d"} -1 + {src="sgn-e"} 0 + {src="sgn-f"} 1 # Tests for sort/sort_desc. 
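To make the sgn() behaviour described in pkg/streamingpromql/functions/math.go above concrete: negative values map to -1, positive values to 1, and 0 or NaN is returned unchanged, so a negative NaN keeps its sign bit. A standalone sketch (sgn here is a local stand-in, not the engine's code):

package main

import (
	"fmt"
	"math"
)

// sgn mirrors the Sgn transformation added above.
func sgn(f float64) float64 {
	if f < 0 {
		return -1
	}
	if f > 0 {
		return 1
	}
	// 0 and NaN (of either sign) fall through unchanged.
	return f
}

func main() {
	fmt.Println(sgn(-2.5), sgn(3), sgn(0)) // -1 1 0

	negNaN := math.Copysign(math.NaN(), -1)
	fmt.Println(math.IsNaN(sgn(negNaN)), math.Signbit(sgn(negNaN))) // true true: the NaN and its sign survive
}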
@@ -1353,10 +1352,9 @@ load 5m exp_root_log{l="x"} 10 exp_root_log{l="y"} 20 -# Unsupported by streaming engine. -# eval instant at 5m exp(exp_root_log) -# {l="x"} 22026.465794806718 -# {l="y"} 485165195.4097903 +eval instant at 5m exp(exp_root_log) + {l="x"} 22026.465794806718 + {l="y"} 485165195.4097903 # Unsupported by streaming engine. # eval instant at 5m exp(exp_root_log - 10) @@ -1368,10 +1366,9 @@ load 5m # {l="x"} 4.5399929762484854e-05 # {l="y"} 1 -# Unsupported by streaming engine. -# eval instant at 5m ln(exp_root_log) -# {l="x"} 2.302585092994046 -# {l="y"} 2.995732273553991 +eval instant at 5m ln(exp_root_log) + {l="x"} 2.302585092994046 + {l="y"} 2.995732273553991 # Unsupported by streaming engine. # eval instant at 5m ln(exp_root_log - 10) @@ -1383,20 +1380,17 @@ load 5m # {l="y"} -Inf # {l="x"} NaN -# Unsupported by streaming engine. -# eval instant at 5m exp(ln(exp_root_log)) -# {l="y"} 20 -# {l="x"} 10 +eval instant at 5m exp(ln(exp_root_log)) + {l="y"} 20 + {l="x"} 10 -# Unsupported by streaming engine. -# eval instant at 5m sqrt(exp_root_log) -# {l="x"} 3.1622776601683795 -# {l="y"} 4.47213595499958 +eval instant at 5m sqrt(exp_root_log) + {l="x"} 3.1622776601683795 + {l="y"} 4.47213595499958 -# Unsupported by streaming engine. -# eval instant at 5m log2(exp_root_log) -# {l="x"} 3.3219280948873626 -# {l="y"} 4.321928094887363 +eval instant at 5m log2(exp_root_log) + {l="x"} 3.3219280948873626 + {l="y"} 4.321928094887363 # Unsupported by streaming engine. # eval instant at 5m log2(exp_root_log - 10) @@ -1408,10 +1402,9 @@ load 5m # {l="x"} NaN # {l="y"} -Inf -# Unsupported by streaming engine. -# eval instant at 5m log10(exp_root_log) -# {l="x"} 1 -# {l="y"} 1.301029995663981 +eval instant at 5m log10(exp_root_log) + {l="x"} 1 + {l="y"} 1.301029995663981 # Unsupported by streaming engine. # eval instant at 5m log10(exp_root_log - 10) @@ -1424,3 +1417,12 @@ load 5m # {l="y"} -Inf clear + +# Test that timestamp() handles the scenario where there are more steps than samples. +load 1m + metric 0+1x1000 + +# We expect the value to be 0 for t=0s to t=59s (inclusive), then 60 for t=60s and t=61s. +# Unsupported by streaming engine. +# eval range from 0 to 61s step 1s timestamp(metric) +# {} 0x59 60 60 diff --git a/pkg/streamingpromql/testdata/upstream/range_queries.test b/pkg/streamingpromql/testdata/upstream/range_queries.test new file mode 100644 index 00000000000..e275d261b5d --- /dev/null +++ b/pkg/streamingpromql/testdata/upstream/range_queries.test @@ -0,0 +1,79 @@ +# sum_over_time with all values +load 30s + bar 0 1 10 100 1000 + +# Unsupported by streaming engine. +# eval range from 0 to 2m step 1m sum_over_time(bar[30s]) +# {} 0 11 1100 + +clear + +# sum_over_time with trailing values +load 30s + bar 0 1 10 100 1000 0 0 0 0 + +# Unsupported by streaming engine. +# eval range from 0 to 2m step 1m sum_over_time(bar[30s]) +# {} 0 11 1100 + +clear + +# sum_over_time with all values long +load 30s + bar 0 1 10 100 1000 10000 100000 1000000 10000000 + +# Unsupported by streaming engine. +# eval range from 0 to 4m step 1m sum_over_time(bar[30s]) +# {} 0 11 1100 110000 11000000 + +clear + +# sum_over_time with all values random +load 30s + bar 5 17 42 2 7 905 51 + +# Unsupported by streaming engine. 
+# eval range from 0 to 3m step 1m sum_over_time(bar[30s]) +# {} 5 59 9 956 + +clear + +# metric query +load 30s + metric 1+1x4 + +eval range from 0 to 2m step 1m metric + metric 1 3 5 + +clear + +# metric query with trailing values +load 30s + metric 1+1x8 + +eval range from 0 to 2m step 1m metric + metric 1 3 5 + +clear + +# short-circuit +load 30s + foo{job="1"} 1+1x4 + bar{job="2"} 1+1x4 + +# Unsupported by streaming engine. +# eval range from 0 to 2m step 1m foo > 2 or bar +# foo{job="1"} _ 3 5 +# bar{job="2"} 1 3 5 + +clear + +# Drop metric name +load 30s + requests{job="1", __address__="bar"} 100 + +# Unsupported by streaming engine. +# eval range from 0 to 2m step 1m requests * 2 +# {job="1", __address__="bar"} 200 200 200 + +clear diff --git a/pkg/streamingpromql/testdata/upstream/trig_functions.test b/pkg/streamingpromql/testdata/upstream/trig_functions.test index ed85edd362e..9aa04996fa6 100644 --- a/pkg/streamingpromql/testdata/upstream/trig_functions.test +++ b/pkg/streamingpromql/testdata/upstream/trig_functions.test @@ -10,23 +10,20 @@ load 5m trig{l="y"} 20 trig{l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m sin(trig) -# {l="x"} -0.5440211108893699 -# {l="y"} 0.9129452507276277 -# {l="NaN"} NaN +eval instant at 5m sin(trig) + {l="x"} -0.5440211108893699 + {l="y"} 0.9129452507276277 + {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m cos(trig) -# {l="x"} -0.8390715290764524 -# {l="y"} 0.40808206181339196 -# {l="NaN"} NaN +eval instant at 5m cos(trig) + {l="x"} -0.8390715290764524 + {l="y"} 0.40808206181339196 + {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m tan(trig) -# {l="x"} 0.6483608274590867 -# {l="y"} 2.2371609442247427 -# {l="NaN"} NaN +eval instant at 5m tan(trig) + {l="x"} 0.6483608274590867 + {l="y"} 2.2371609442247427 + {l="NaN"} NaN # Unsupported by streaming engine. # eval instant at 5m asin(trig - 10.1) @@ -40,41 +37,35 @@ load 5m # {l="y"} NaN # {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m atan(trig) -# {l="x"} 1.4711276743037345 -# {l="y"} 1.5208379310729538 -# {l="NaN"} NaN +eval instant at 5m atan(trig) + {l="x"} 1.4711276743037345 + {l="y"} 1.5208379310729538 + {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m sinh(trig) -# {l="x"} 11013.232920103324 -# {l="y"} 2.4258259770489514e+08 -# {l="NaN"} NaN +eval instant at 5m sinh(trig) + {l="x"} 11013.232920103324 + {l="y"} 2.4258259770489514e+08 + {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m cosh(trig) -# {l="x"} 11013.232920103324 -# {l="y"} 2.4258259770489514e+08 -# {l="NaN"} NaN +eval instant at 5m cosh(trig) + {l="x"} 11013.232920103324 + {l="y"} 2.4258259770489514e+08 + {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m tanh(trig) -# {l="x"} 0.9999999958776927 -# {l="y"} 1 -# {l="NaN"} NaN +eval instant at 5m tanh(trig) + {l="x"} 0.9999999958776927 + {l="y"} 1 + {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m asinh(trig) -# {l="x"} 2.99822295029797 -# {l="y"} 3.6895038689889055 -# {l="NaN"} NaN +eval instant at 5m asinh(trig) + {l="x"} 2.99822295029797 + {l="y"} 3.6895038689889055 + {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m acosh(trig) -# {l="x"} 2.993222846126381 -# {l="y"} 3.6882538673612966 -# {l="NaN"} NaN +eval instant at 5m acosh(trig) + {l="x"} 2.993222846126381 + {l="y"} 3.6882538673612966 + {l="NaN"} NaN # Unsupported by streaming engine. 
# eval instant at 5m atanh(trig - 10.1) @@ -82,11 +73,10 @@ load 5m # {l="y"} NaN # {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m rad(trig) -# {l="x"} 0.17453292519943295 -# {l="y"} 0.3490658503988659 -# {l="NaN"} NaN +eval instant at 5m rad(trig) + {l="x"} 0.17453292519943295 + {l="y"} 0.3490658503988659 + {l="NaN"} NaN # Unsupported by streaming engine. # eval instant at 5m rad(trig - 10) @@ -100,11 +90,10 @@ load 5m # {l="y"} 0 # {l="NaN"} NaN -# Unsupported by streaming engine. -# eval instant at 5m deg(trig) -# {l="x"} 572.9577951308232 -# {l="y"} 1145.9155902616465 -# {l="NaN"} NaN +eval instant at 5m deg(trig) + {l="x"} 572.9577951308232 + {l="y"} 1145.9155902616465 + {l="NaN"} NaN # Unsupported by streaming engine. # eval instant at 5m deg(trig - 10) @@ -123,4 +112,4 @@ clear # Unsupported by streaming engine. # eval instant at 0s pi() # 3.141592653589793 -# \ No newline at end of file +# diff --git a/pkg/util/labels.go b/pkg/util/labels.go index 65fbf6e6684..be7f6b40936 100644 --- a/pkg/util/labels.go +++ b/pkg/util/labels.go @@ -8,20 +8,9 @@ package util import ( "strings" - "github.com/prometheus/common/model" "github.com/prometheus/prometheus/model/labels" ) -// LabelsToMetric converts a Labels to Metric -// Don't do this on any performance sensitive paths. -func LabelsToMetric(ls labels.Labels) model.Metric { - m := make(model.Metric, ls.Len()) - ls.Range(func(l labels.Label) { - m[model.LabelName(l.Name)] = model.LabelValue(l.Value) - }) - return m -} - // LabelMatchersToString returns a string representing the input label matchers. func LabelMatchersToString(matchers []*labels.Matcher) string { out := strings.Builder{} diff --git a/pkg/util/test/shape.go b/pkg/util/test/shape.go index 2afae831ab7..2e5149c2450 100644 --- a/pkg/util/test/shape.go +++ b/pkg/util/test/shape.go @@ -5,6 +5,7 @@ package test import ( "fmt" "reflect" + "strings" "testing" "github.com/stretchr/testify/require" @@ -20,32 +21,36 @@ const ignoredFieldName = "" // but we also check the names are the same here to ensure there's no confusion // (eg. two bool fields swapped) when ignoreName is false. However, when you // know the names are different, you can set ignoreName to true. -func RequireSameShape(t *testing.T, expectedType, actualType any, ignoreName bool) { - expectedFormatted := prettyPrintType(reflect.TypeOf(expectedType), ignoreName) - actualFormatted := prettyPrintType(reflect.TypeOf(actualType), ignoreName) +// The ignoreXXXPrefix flag is used to ignore fields with the XXX_ prefix. 
+func RequireSameShape(t *testing.T, expectedType, actualType any, ignoreName bool, ignoreXXXPrefix bool) { + expectedFormatted := prettyPrintType(reflect.TypeOf(expectedType), ignoreName, ignoreXXXPrefix) + actualFormatted := prettyPrintType(reflect.TypeOf(actualType), ignoreName, ignoreXXXPrefix) require.Equal(t, expectedFormatted, actualFormatted) } -func prettyPrintType(t reflect.Type, ignoreName bool) string { +func prettyPrintType(t reflect.Type, ignoreName bool, ignoreXXXPrefix bool) string { if t.Kind() != reflect.Struct { panic(fmt.Sprintf("expected %s to be a struct but is %s", t.Name(), t.Kind())) } tree := treeprint.NewWithRoot("") - addTypeToTree(t, tree, ignoreName) + addTypeToTree(t, tree, ignoreName, ignoreXXXPrefix) return tree.String() } -func addTypeToTree(t reflect.Type, tree treeprint.Tree, ignoreName bool) { +func addTypeToTree(t reflect.Type, tree treeprint.Tree, ignoreName bool, ignoreXXXPrefix bool) { if t.Kind() == reflect.Pointer { fieldName := t.Name() + if ignoreXXXPrefix && strings.HasPrefix(fieldName, "XXX_") { + return + } if ignoreName { fieldName = ignoredFieldName } name := fmt.Sprintf("%s: *%s", fieldName, t.Elem().Kind()) - addTypeToTree(t.Elem(), tree.AddBranch(name), ignoreName) + addTypeToTree(t.Elem(), tree.AddBranch(name), ignoreName, ignoreXXXPrefix) return } @@ -56,6 +61,9 @@ func addTypeToTree(t reflect.Type, tree treeprint.Tree, ignoreName bool) { for i := 0; i < t.NumField(); i++ { f := t.Field(i) fieldName := f.Name + if ignoreXXXPrefix && strings.HasPrefix(fieldName, "XXX_") { + return + } if ignoreName { fieldName = ignoredFieldName } @@ -63,14 +71,14 @@ func addTypeToTree(t reflect.Type, tree treeprint.Tree, ignoreName bool) { switch f.Type.Kind() { case reflect.Pointer: name := fmt.Sprintf("+%v %s: *%s", f.Offset, fieldName, f.Type.Elem().Kind()) - addTypeToTree(f.Type.Elem(), tree.AddBranch(name), ignoreName) + addTypeToTree(f.Type.Elem(), tree.AddBranch(name), ignoreName, ignoreXXXPrefix) case reflect.Slice: name := fmt.Sprintf("+%v %s: []%s", f.Offset, fieldName, f.Type.Elem().Kind()) if isPrimitive(f.Type.Elem().Kind()) { tree.AddNode(name) } else { - addTypeToTree(f.Type.Elem(), tree.AddBranch(name), ignoreName) + addTypeToTree(f.Type.Elem(), tree.AddBranch(name), ignoreName, ignoreXXXPrefix) } default: name := fmt.Sprintf("+%v %s: %s", f.Offset, fieldName, f.Type.Kind()) diff --git a/pkg/util/validation/limits.go b/pkg/util/validation/limits.go index 65582147357..d85bf3b6710 100644 --- a/pkg/util/validation/limits.go +++ b/pkg/util/validation/limits.go @@ -178,13 +178,15 @@ type Limits struct { ActiveSeriesResultsMaxSizeBytes int `yaml:"active_series_results_max_size_bytes" json:"active_series_results_max_size_bytes" category:"experimental"` // Ruler defaults and limits. 
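The new `ignoreXXXPrefix` flag is aimed at protobuf-generated structs, whose trailing `XXX_`-prefixed bookkeeping fields should not take part in the shape comparison. A minimal test-style sketch of calling the extended helper; the two struct types are hypothetical, only `RequireSameShape` and its new parameter come from the change above:

```go
package example

import (
	"testing"

	"github.com/grafana/mimir/pkg/util/test"
)

// minimalPB mimics a protobuf-generated struct whose trailing XXX_ field
// should be ignored when comparing shapes.
type minimalPB struct {
	ID               int64
	Name             string
	XXX_unrecognized []byte
}

// minimalNative is the hand-written counterpart without the XXX_ field.
type minimalNative struct {
	ID   int64
	Name string
}

func TestShapesMatchIgnoringXXXFields(t *testing.T) {
	// With ignoreXXXPrefix=true the XXX_unrecognized field is skipped, so the
	// two structs produce identical field trees and the assertion passes.
	test.RequireSameShape(t, minimalPB{}, minimalNative{}, false, true)
}
```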
- RulerEvaluationDelay model.Duration `yaml:"ruler_evaluation_delay_duration" json:"ruler_evaluation_delay_duration"` - RulerTenantShardSize int `yaml:"ruler_tenant_shard_size" json:"ruler_tenant_shard_size"` - RulerMaxRulesPerRuleGroup int `yaml:"ruler_max_rules_per_rule_group" json:"ruler_max_rules_per_rule_group"` - RulerMaxRuleGroupsPerTenant int `yaml:"ruler_max_rule_groups_per_tenant" json:"ruler_max_rule_groups_per_tenant"` - RulerRecordingRulesEvaluationEnabled bool `yaml:"ruler_recording_rules_evaluation_enabled" json:"ruler_recording_rules_evaluation_enabled"` - RulerAlertingRulesEvaluationEnabled bool `yaml:"ruler_alerting_rules_evaluation_enabled" json:"ruler_alerting_rules_evaluation_enabled"` - RulerSyncRulesOnChangesEnabled bool `yaml:"ruler_sync_rules_on_changes_enabled" json:"ruler_sync_rules_on_changes_enabled" category:"advanced"` + RulerEvaluationDelay model.Duration `yaml:"ruler_evaluation_delay_duration" json:"ruler_evaluation_delay_duration"` + RulerTenantShardSize int `yaml:"ruler_tenant_shard_size" json:"ruler_tenant_shard_size"` + RulerMaxRulesPerRuleGroup int `yaml:"ruler_max_rules_per_rule_group" json:"ruler_max_rules_per_rule_group"` + RulerMaxRuleGroupsPerTenant int `yaml:"ruler_max_rule_groups_per_tenant" json:"ruler_max_rule_groups_per_tenant"` + RulerRecordingRulesEvaluationEnabled bool `yaml:"ruler_recording_rules_evaluation_enabled" json:"ruler_recording_rules_evaluation_enabled"` + RulerAlertingRulesEvaluationEnabled bool `yaml:"ruler_alerting_rules_evaluation_enabled" json:"ruler_alerting_rules_evaluation_enabled"` + RulerSyncRulesOnChangesEnabled bool `yaml:"ruler_sync_rules_on_changes_enabled" json:"ruler_sync_rules_on_changes_enabled" category:"advanced"` + RulerMaxRulesPerRuleGroupByNamespace LimitsMap[int] `yaml:"ruler_max_rules_per_rule_group_by_namespace" json:"ruler_max_rules_per_rule_group_by_namespace" category:"experimental"` + RulerMaxRuleGroupsPerTenantByNamespace LimitsMap[int] `yaml:"ruler_max_rule_groups_per_tenant_by_namespace" json:"ruler_max_rule_groups_per_tenant_by_namespace" category:"experimental"` // Store-gateway. 
StoreGatewayTenantShardSize int `yaml:"store_gateway_tenant_shard_size" json:"store_gateway_tenant_shard_size"` @@ -210,8 +212,8 @@ type Limits struct { AlertmanagerReceiversBlockCIDRNetworks flagext.CIDRSliceCSV `yaml:"alertmanager_receivers_firewall_block_cidr_networks" json:"alertmanager_receivers_firewall_block_cidr_networks"` AlertmanagerReceiversBlockPrivateAddresses bool `yaml:"alertmanager_receivers_firewall_block_private_addresses" json:"alertmanager_receivers_firewall_block_private_addresses"` - NotificationRateLimit float64 `yaml:"alertmanager_notification_rate_limit" json:"alertmanager_notification_rate_limit"` - NotificationRateLimitPerIntegration NotificationRateLimitMap `yaml:"alertmanager_notification_rate_limit_per_integration" json:"alertmanager_notification_rate_limit_per_integration"` + NotificationRateLimit float64 `yaml:"alertmanager_notification_rate_limit" json:"alertmanager_notification_rate_limit"` + NotificationRateLimitPerIntegration LimitsMap[float64] `yaml:"alertmanager_notification_rate_limit_per_integration" json:"alertmanager_notification_rate_limit_per_integration"` AlertmanagerMaxConfigSizeBytes int `yaml:"alertmanager_max_config_size_bytes" json:"alertmanager_max_config_size_bytes"` AlertmanagerMaxSilencesCount int `yaml:"alertmanager_max_silences_count" json:"alertmanager_max_silences_count"` @@ -279,7 +281,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) { f.IntVar(&l.MaxFetchedChunkBytesPerQuery, MaxChunkBytesPerQueryFlag, 0, "The maximum size of all chunks in bytes that a query can fetch from ingesters and store-gateways. This limit is enforced in the querier and ruler. 0 to disable.") f.Uint64Var(&l.MaxEstimatedMemoryConsumptionPerQuery, MaxEstimatedMemoryConsumptionPerQueryFlag, 0, "The maximum estimated memory a single query can consume at once, in bytes. This limit is only enforced when Mimir's query engine is in use. This limit is enforced in the querier. 0 to disable.") f.Var(&l.MaxPartialQueryLength, MaxPartialQueryLengthFlag, "Limit the time range for partial queries at the querier level.") - f.Var(&l.MaxQueryLookback, "querier.max-query-lookback", "Limit how long back data (series and metadata) can be queried, up until duration ago. This limit is enforced in the query-frontend, querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable.") + f.Var(&l.MaxQueryLookback, "querier.max-query-lookback", "Limit how long back data (series and metadata) can be queried, up until duration ago. This limit is enforced in the query-frontend, querier and ruler for instant, range and remote read queries. For metadata queries like series, label names, label values queries the limit is enforced in the querier and ruler. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable.") f.IntVar(&l.MaxQueryParallelism, "querier.max-query-parallelism", 14, "Maximum number of split (by time) or partial (by shard) queries that will be scheduled in parallel by the query-frontend for a single input query. This limit is introduced to have a fairer query scheduling and avoid a single query over a large time range saturating all available queriers.") f.Var(&l.MaxLabelsQueryLength, "store.max-labels-query-length", "Limit the time range (end - start time) of series, label names and values queries. 
This limit is enforced in the querier. If the requested time range is outside the allowed range, the request will not fail but will be manipulated to only query data within the allowed time range. 0 to disable.") f.IntVar(&l.LabelNamesAndValuesResultsMaxSizeBytes, "querier.label-names-and-values-results-max-size-bytes", 400*1024*1024, "Maximum size in bytes of distinct label names and values. When querier receives response from ingester, it merges the response with responses from other ingesters. This maximum size limit is applied to the merged(distinct) results. If the limit is reached, an error is returned.") @@ -305,8 +307,18 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) { f.BoolVar(&l.RulerRecordingRulesEvaluationEnabled, "ruler.recording-rules-evaluation-enabled", true, "Controls whether recording rules evaluation is enabled. This configuration option can be used to forcefully disable recording rules evaluation on a per-tenant basis.") f.BoolVar(&l.RulerAlertingRulesEvaluationEnabled, "ruler.alerting-rules-evaluation-enabled", true, "Controls whether alerting rules evaluation is enabled. This configuration option can be used to forcefully disable alerting rules evaluation on a per-tenant basis.") f.BoolVar(&l.RulerSyncRulesOnChangesEnabled, "ruler.sync-rules-on-changes-enabled", true, "True to enable a re-sync of the configured rule groups as soon as they're changed via ruler's config API. This re-sync is in addition of the periodic syncing. When enabled, it may take up to few tens of seconds before a configuration change triggers the re-sync.") + // Needs to be initialised to a value so that the documentation can pick up the default value of `{}` because this is set as JSON from the command-line. + if !l.RulerMaxRulesPerRuleGroupByNamespace.IsInitialized() { + l.RulerMaxRulesPerRuleGroupByNamespace = NewLimitsMap[int](nil) + } + f.Var(&l.RulerMaxRulesPerRuleGroupByNamespace, "ruler.max-rules-per-rule-group-by-namespace", "Maximum number of rules per rule group by namespace. Value is a map, where each key is the namespace and value is the number of rules allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rules specified has the same meaning as -ruler.max-rules-per-rule-group, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rules-per-rule-group.") + + if !l.RulerMaxRuleGroupsPerTenantByNamespace.IsInitialized() { + l.RulerMaxRuleGroupsPerTenantByNamespace = NewLimitsMap[int](nil) + } + f.Var(&l.RulerMaxRuleGroupsPerTenantByNamespace, "ruler.max-rule-groups-per-tenant-by-namespace", "Maximum number of rule groups per tenant by namespace. Value is a map, where each key is the namespace and value is the number of rule groups allowed in the namespace (int). On the command line, this map is given in a JSON format. The number of rule groups specified has the same meaning as -ruler.max-rule-groups-per-tenant, but only applies for the specific namespace. If specified, it supersedes -ruler.max-rule-groups-per-tenant.") - f.Var(&l.CompactorBlocksRetentionPeriod, "compactor.blocks-retention-period", "Delete blocks containing samples older than the specified retention period. Also used by query-frontend to avoid querying beyond the retention period. 0 to disable.") + f.Var(&l.CompactorBlocksRetentionPeriod, "compactor.blocks-retention-period", "Delete blocks containing samples older than the specified retention period. 
Also used by query-frontend to avoid querying beyond the retention period by instant, range or remote read queries. 0 to disable.") f.IntVar(&l.CompactorSplitAndMergeShards, "compactor.split-and-merge-shards", 0, "The number of shards to use when splitting blocks. 0 to disable splitting.") f.IntVar(&l.CompactorSplitGroups, "compactor.split-groups", 1, "Number of groups that blocks for splitting should be grouped into. Each group of blocks is then split separately. Number of output split shards is controlled by -compactor.split-and-merge-shards.") f.IntVar(&l.CompactorTenantShardSize, "compactor.compactor-tenant-shard-size", 0, "Max number of compactors that can compact blocks for single tenant. 0 to disable the limit and use all compactors.") @@ -318,7 +330,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) { f.Int64Var(&l.CompactorBlockUploadMaxBlockSizeBytes, "compactor.block-upload-max-block-size-bytes", 0, "Maximum size in bytes of a block that is allowed to be uploaded or validated. 0 = no limit.") // Query-frontend. - f.Var(&l.MaxTotalQueryLength, MaxTotalQueryLengthFlag, "Limit the total query time range (end - start time). This limit is enforced in the query-frontend on the received query.") + f.Var(&l.MaxTotalQueryLength, MaxTotalQueryLengthFlag, "Limit the total query time range (end - start time). This limit is enforced in the query-frontend on the received instant, range or remote read query.") _ = l.ResultsCacheTTL.Set("7d") f.Var(&l.ResultsCacheTTL, resultsCacheTTLFlag, fmt.Sprintf("Time to live duration for cached query results. If query falls into out-of-order time window, -%s is used instead.", resultsCacheTTLForOutOfOrderWindowFlag)) _ = l.ResultsCacheTTLForOutOfOrderTimeWindow.Set("10m") @@ -326,7 +338,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) { f.Var(&l.ResultsCacheTTLForCardinalityQuery, "query-frontend.results-cache-ttl-for-cardinality-query", "Time to live duration for cached cardinality query results. The value 0 disables the cache.") f.Var(&l.ResultsCacheTTLForLabelsQuery, "query-frontend.results-cache-ttl-for-labels-query", "Time to live duration for cached label names and label values query results. The value 0 disables the cache.") f.BoolVar(&l.ResultsCacheForUnalignedQueryEnabled, "query-frontend.cache-unaligned-requests", false, "Cache requests that are not step-aligned.") - f.IntVar(&l.MaxQueryExpressionSizeBytes, MaxQueryExpressionSizeBytesFlag, 0, "Max size of the raw query, in bytes. 0 to not apply a limit to the size of the query.") + f.IntVar(&l.MaxQueryExpressionSizeBytes, MaxQueryExpressionSizeBytesFlag, 0, "Max size of the raw query, in bytes. This limit is enforced by the query-frontend for instant, range and remote read queries. 0 to not apply a limit to the size of the query.") f.BoolVar(&l.AlignQueriesWithStep, alignQueriesWithStepFlag, false, "Mutate incoming queries to align their start and end with their step to improve result caching.") // Store-gateway. @@ -338,8 +350,9 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) { f.Float64Var(&l.NotificationRateLimit, "alertmanager.notification-rate-limit", 0, "Per-tenant rate limit for sending notifications from Alertmanager in notifications/sec. 0 = rate limit disabled. 
Negative value = no notifications are allowed.") - if l.NotificationRateLimitPerIntegration == nil { - l.NotificationRateLimitPerIntegration = NotificationRateLimitMap{} + // Needs to be initialised to a value so that the documentation can pick up the default value of `{}` because this is set as JSON from the command-line. + if !l.NotificationRateLimitPerIntegration.IsInitialized() { + l.NotificationRateLimitPerIntegration = NotificationRateLimitMap() } f.Var(&l.NotificationRateLimitPerIntegration, "alertmanager.notification-rate-limit-per-integration", "Per-integration notification rate limits. Value is a map, where each key is integration name and value is a rate-limit (float). On command line, this map is given in JSON format. Rate limit has the same meaning as -alertmanager.notification-rate-limit, but only applies for specific integration. Allowed integration names: "+strings.Join(allowedIntegrationNames, ", ")+".") f.IntVar(&l.AlertmanagerMaxConfigSizeBytes, "alertmanager.max-config-size-bytes", 0, "Maximum size of configuration file for Alertmanager that tenant can upload via Alertmanager API. 0 = no limit.") @@ -378,8 +391,11 @@ func (l *Limits) unmarshal(decode func(any) error) error { // We want to set l to the defaults and then overwrite it with the input. if defaultLimits != nil { *l = *defaultLimits + // Make copy of default limits, otherwise unmarshalling would modify map in default limits. - l.copyNotificationIntegrationLimits(defaultLimits.NotificationRateLimitPerIntegration) + l.NotificationRateLimitPerIntegration = defaultLimits.NotificationRateLimitPerIntegration.Clone() + l.RulerMaxRulesPerRuleGroupByNamespace = defaultLimits.RulerMaxRulesPerRuleGroupByNamespace.Clone() + l.RulerMaxRuleGroupsPerTenantByNamespace = defaultLimits.RulerMaxRuleGroupsPerTenantByNamespace.Clone() } // Decode into a reflection-crafted struct that has fields for the extensions. @@ -428,13 +444,6 @@ func (l *Limits) validate() error { return nil } -func (l *Limits) copyNotificationIntegrationLimits(defaults NotificationRateLimitMap) { - l.NotificationRateLimitPerIntegration = make(map[string]float64, len(defaults)) - for k, v := range defaults { - l.NotificationRateLimitPerIntegration[k] = v - } -} - // When we load YAML from disk, we want the various per-customer limits // to default to any values specified on the command line, not default // command line values. This global contains those values. I (Tom) cannot @@ -831,13 +840,35 @@ func (o *Overrides) RulerTenantShardSize(userID string) int { } // RulerMaxRulesPerRuleGroup returns the maximum number of rules per rule group for a given user. -func (o *Overrides) RulerMaxRulesPerRuleGroup(userID string) int { - return o.getOverridesForUser(userID).RulerMaxRulesPerRuleGroup +// This limit is special. Limits are returned in the following order: +// 1. Per tenant limit for the given namespace. +// 2. Default limit for the given namespace. +// 3. Per tenant limit set by RulerMaxRulesPerRuleGroup +// 4. Default limit set by RulerMaxRulesPerRuleGroup +func (o *Overrides) RulerMaxRulesPerRuleGroup(userID, namespace string) int { + u := o.getOverridesForUser(userID) + + if namespaceLimit, ok := u.RulerMaxRulesPerRuleGroupByNamespace.data[namespace]; ok { + return namespaceLimit + } + + return u.RulerMaxRulesPerRuleGroup } // RulerMaxRuleGroupsPerTenant returns the maximum number of rule groups for a given user. 
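The ordering documented above can be exercised directly through `Overrides`. A minimal sketch of the new two-argument lookup, using only the APIs shown in this diff; the call to `RegisterFlags` on a throwaway flag set is there purely to initialise defaults (including the new `LimitsMap` fields), and the tenant and namespace names are illustrative:

```go
package example

import (
	"flag"
	"testing"

	"github.com/stretchr/testify/require"

	"github.com/grafana/mimir/pkg/util/validation"
)

func TestNamespaceLimitSupersedesRuleGroupLimit(t *testing.T) {
	// Register flags on a throwaway flag set so all defaults, including the
	// per-namespace LimitsMap fields, are initialised.
	limits := validation.Limits{}
	limits.RegisterFlags(flag.NewFlagSet("defaults", flag.PanicOnError))

	limits.RulerMaxRulesPerRuleGroup = 20
	require.NoError(t, limits.RulerMaxRulesPerRuleGroupByNamespace.Set(`{"critical": 100}`))

	ov, err := validation.NewOverrides(limits, nil)
	require.NoError(t, err)

	// The namespace-specific entry wins for "critical"; any other namespace
	// falls back to the generic -ruler.max-rules-per-rule-group value.
	require.Equal(t, 100, ov.RulerMaxRulesPerRuleGroup("tenant-1", "critical"))
	require.Equal(t, 20, ov.RulerMaxRulesPerRuleGroup("tenant-1", "other"))
}
```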
-func (o *Overrides) RulerMaxRuleGroupsPerTenant(userID string) int { - return o.getOverridesForUser(userID).RulerMaxRuleGroupsPerTenant +// This limit is special. Limits are returned in the following order: +// 1. Per tenant limit for the given namespace. +// 2. Default limit for the given namespace. +// 3. Per tenant limit set by RulerMaxRuleGroupsPerTenant +// 4. Default limit set by RulerMaxRuleGroupsPerTenant +func (o *Overrides) RulerMaxRuleGroupsPerTenant(userID, namespace string) int { + u := o.getOverridesForUser(userID) + + if namespaceLimit, ok := u.RulerMaxRuleGroupsPerTenantByNamespace.data[namespace]; ok { + return namespaceLimit + } + + return u.RulerMaxRuleGroupsPerTenant } // RulerRecordingRulesEvaluationEnabled returns whether the recording rules evaluation is enabled for a given user. @@ -899,7 +930,7 @@ func (o *Overrides) AlertmanagerReceiversBlockPrivateAddresses(user string) bool // 4. default limits func (o *Overrides) getNotificationLimitForUser(user, integration string) float64 { u := o.getOverridesForUser(user) - if n, ok := u.NotificationRateLimitPerIntegration[integration]; ok { + if n, ok := u.NotificationRateLimitPerIntegration.data[integration]; ok { return n } diff --git a/pkg/util/validation/limits_map.go b/pkg/util/validation/limits_map.go new file mode 100644 index 00000000000..4d991699e0c --- /dev/null +++ b/pkg/util/validation/limits_map.go @@ -0,0 +1,101 @@ +// SPDX-License-Identifier: AGPL-3.0-only + +package validation + +import ( + "encoding/json" + "fmt" + + "gopkg.in/yaml.v3" +) + +// LimitsMap is a generic map that can hold either float64 or int as values. +type LimitsMap[T float64 | int] struct { + data map[string]T + validator func(k string, v T) error +} + +func NewLimitsMap[T float64 | int](validator func(k string, v T) error) LimitsMap[T] { + return LimitsMap[T]{ + data: make(map[string]T), + validator: validator, + } +} + +// IsInitialized returns true if the map is initialized. +func (m LimitsMap[T]) IsInitialized() bool { + return m.data != nil +} + +// String implements flag.Value +func (m LimitsMap[T]) String() string { + out, err := json.Marshal(m.data) + if err != nil { + return fmt.Sprintf("failed to marshal: %v", err) + } + return string(out) +} + +// Set implements flag.Value +func (m LimitsMap[T]) Set(s string) error { + newMap := make(map[string]T) + if err := json.Unmarshal([]byte(s), &newMap); err != nil { + return err + } + return m.updateMap(newMap) +} + +// Clone returns a copy of the LimitsMap. +func (m LimitsMap[T]) Clone() LimitsMap[T] { + newMap := make(map[string]T, len(m.data)) + for k, v := range m.data { + newMap[k] = v + } + return LimitsMap[T]{data: newMap, validator: m.validator} +} + +// UnmarshalYAML implements yaml.Unmarshaler. +func (m LimitsMap[T]) UnmarshalYAML(value *yaml.Node) error { + newMap := make(map[string]T) + if err := value.DecodeWithOptions(newMap, yaml.DecodeOptions{KnownFields: true}); err != nil { + return err + } + return m.updateMap(newMap) +} + +func (m LimitsMap[T]) updateMap(newMap map[string]T) error { + // Validate first, as we don't want to allow partial updates. + if m.validator != nil { + for k, v := range newMap { + if err := m.validator(k, v); err != nil { + return err + } + } + } + + for k, v := range newMap { + m.data[k] = v + } + + return nil +} + +// MarshalYAML implements yaml.Marshaler. +func (m LimitsMap[T]) MarshalYAML() (interface{}, error) { + return m.data, nil +} + +// Equal compares two LimitsMap. This is needed to allow cmp.Equal to compare two LimitsMap. 
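Because `LimitsMap` implements `flag.Value` through `Set` and `String`, a per-namespace or per-integration limit can be registered on an ordinary `flag.FlagSet` and supplied as a single JSON document, with the optional validator rejecting the whole update on the first invalid entry. A minimal sketch using only the constructor and methods shown above; the flag name and validator are illustrative:

```go
package main

import (
	"flag"
	"fmt"

	"github.com/grafana/mimir/pkg/util/validation"
)

func main() {
	// A LimitsMap[int] whose validator rejects negative limits.
	limits := validation.NewLimitsMap[int](func(k string, v int) error {
		if v < 0 {
			return fmt.Errorf("limit for %q must not be negative", k)
		}
		return nil
	})

	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	fs.Var(&limits, "rules-per-group-by-namespace", "per-namespace limit, given as a JSON map")

	// The whole map is parsed, validated and applied in one step.
	if err := fs.Parse([]string{`-rules-per-group-by-namespace={"apps": 10, "infra": 25}`}); err != nil {
		panic(err)
	}

	fmt.Println(limits.String()) // {"apps":10,"infra":25}
}
```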
+func (m LimitsMap[T]) Equal(other LimitsMap[T]) bool { + if len(m.data) != len(other.data) { + return false + } + + for k, v := range m.data { + if other.data[k] != v { + return false + } + } + + return true +} diff --git a/pkg/util/validation/limits_map_test.go b/pkg/util/validation/limits_map_test.go new file mode 100644 index 00000000000..b2a4820772f --- /dev/null +++ b/pkg/util/validation/limits_map_test.go @@ -0,0 +1,217 @@ +// SPDX-License-Identifier: AGPL-3.0-only + +package validation + +import ( + "errors" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/stretchr/testify/require" + "gopkg.in/yaml.v3" +) + +var fakeValidator = func(_ string, v float64) error { + if v < 0 { + return errors.New("value cannot be negative") + } + return nil +} + +func TestNewLimitsMap(t *testing.T) { + lm := NewLimitsMap(fakeValidator) + lm.data["key1"] = 10 + require.Len(t, lm.data, 1) +} + +func TestLimitsMap_IsNil(t *testing.T) { + tc := map[string]struct { + input LimitsMap[float64] + expected bool + }{ + + "when the map is initialised": { + input: LimitsMap[float64]{data: map[string]float64{"key1": 10}}, + expected: true, + }, + "when the map is not initialised": { + input: LimitsMap[float64]{data: nil}, + expected: false, + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + require.Equal(t, tt.input.IsInitialized(), tt.expected) + }) + } +} + +func TestLimitsMap_SetAndString(t *testing.T) { + tc := map[string]struct { + input string + expected map[string]float64 + error string + }{ + + "set without error": { + input: `{"key1":10,"key2":20}`, + expected: map[string]float64{"key1": 10, "key2": 20}, + }, + "set with parsing error": { + input: `{"key1": 10, "key2": 20`, + error: "unexpected end of JSON input", + }, + "set with validation error": { + input: `{"key1": -10, "key2": 20}`, + error: "value cannot be negative", + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + lm := NewLimitsMap(fakeValidator) + err := lm.Set(tt.input) + if tt.error != "" { + require.Error(t, err) + require.Equal(t, tt.error, err.Error()) + } else { + require.NoError(t, err) + require.Equal(t, tt.expected, lm.data) + require.Equal(t, tt.input, lm.String()) + } + }) + } +} + +func TestLimitsMap_UnmarshalYAML(t *testing.T) { + tc := []struct { + name string + input string + expected map[string]float64 + error string + }{ + { + name: "unmarshal without error", + input: ` +key1: 10 +key2: 20 +`, + expected: map[string]float64{"key1": 10, "key2": 20}, + }, + { + name: "unmarshal with validation error", + input: ` +key1: -10 +key2: 20 +`, + error: "value cannot be negative", + }, + { + name: "unmarshal with parsing error", + input: ` +key1: 10 +key2: 20 + key3: 30 +`, + error: "yaml: line 3: found a tab character that violates indentation", + }, + } + + for _, tt := range tc { + t.Run(tt.name, func(t *testing.T) { + lm := NewLimitsMap(fakeValidator) + err := yaml.Unmarshal([]byte(tt.input), &lm) + if tt.error != "" { + require.Error(t, err) + require.Equal(t, tt.error, err.Error()) + } else { + require.NoError(t, err) + require.Equal(t, tt.expected, lm.data) + } + }) + } +} + +func TestLimitsMap_MarshalYAML(t *testing.T) { + lm := NewLimitsMap(fakeValidator) + lm.data["key1"] = 10 + lm.data["key2"] = 20 + + out, err := yaml.Marshal(&lm) + require.NoError(t, err) + require.Equal(t, "key1: 10\nkey2: 20\n", string(out)) +} + +func TestLimitsMap_Equal(t *testing.T) { + tc := map[string]struct { + map1 LimitsMap[float64] + map2 LimitsMap[float64] + expected bool + }{ + 
"Equal maps with same key-value pairs": { + map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, + map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, + expected: true, + }, + "Different maps with different lengths": { + map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1}}, + map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.1, "key2": 2.2}}, + expected: false, + }, + "Different maps with same keys but different values": { + map1: LimitsMap[float64]{data: map[string]float64{"key1": 1.1}}, + map2: LimitsMap[float64]{data: map[string]float64{"key1": 1.2}}, + expected: false, + }, + "Equal empty maps": { + map1: LimitsMap[float64]{data: map[string]float64{}}, + map2: LimitsMap[float64]{data: map[string]float64{}}, + expected: true, + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + require.Equal(t, tt.expected, tt.map1.Equal(LimitsMap[float64]{data: tt.map2.data})) + require.Equal(t, tt.expected, cmp.Equal(tt.map1, tt.map2)) + }) + } +} + +func TestLimitsMap_Clone(t *testing.T) { + // Create an initial LimitsMap with some data. + original := NewLimitsMap[float64](fakeValidator) + original.data["limit1"] = 1.0 + original.data["limit2"] = 2.0 + + // Clone the original LimitsMap. + cloned := original.Clone() + + // Check that the cloned LimitsMap is equal to the original. + require.True(t, original.Equal(cloned), "expected cloned LimitsMap to be different from original") + + // Modify the original LimitsMap and ensure the cloned map is not affected. + original.data["limit1"] = 10.0 + require.False(t, cloned.data["limit1"] == 10.0, "expected cloned LimitsMap to be unaffected by changes to original") + + // Modify the cloned LimitsMap and ensure the original map is not affected. + cloned.data["limit3"] = 3.0 + _, exists := original.data["limit3"] + require.False(t, exists, "expected original LimitsMap to be unaffected by changes to cloned") +} + +func TestLimitsMap_updateMap(t *testing.T) { + initialData := map[string]float64{"a": 1.0, "b": 2.0} + updateData := map[string]float64{"a": 3.0, "b": -3.0, "c": 5.0} + + limitsMap := LimitsMap[float64]{data: initialData, validator: fakeValidator} + + err := limitsMap.updateMap(updateData) + require.Error(t, err) + + // Verify that no partial updates were applied. + // Because maps in Go are accessed in random order, there's a chance that the validation will fail on the first invalid element of the map thus not asserting partial updates. 
+ expectedData := map[string]float64{"a": 1.0, "b": 2.0} + require.Equal(t, expectedData, limitsMap.data) +} diff --git a/pkg/util/validation/limits_test.go b/pkg/util/validation/limits_test.go index 422ebf2a8f0..5ac5e60a5de 100644 --- a/pkg/util/validation/limits_test.go +++ b/pkg/util/validation/limits_test.go @@ -13,6 +13,7 @@ import ( "testing" "time" + "github.com/google/go-cmp/cmp" "github.com/grafana/dskit/flagext" "github.com/pkg/errors" "github.com/prometheus/common/model" @@ -127,7 +128,7 @@ max_partial_query_length: 1s err = json.Unmarshal([]byte(inputJSON), &limitsJSON) require.NoError(t, err, "expected to be able to unmarshal from JSON") - assert.Equal(t, limitsYAML, limitsJSON) + assert.True(t, cmp.Equal(limitsYAML, limitsJSON, cmp.AllowUnexported(Limits{})), "expected YAML and JSON to match") } func TestLimitsAlwaysUsesPromDuration(t *testing.T) { @@ -586,6 +587,334 @@ testuser: } } +func TestRulerMaxRulesPerRuleGroupLimits(t *testing.T) { + tc := map[string]struct { + inputYAML string + expectedLimit int + expectedNamespace string + }{ + "no namespace specific limit": { + inputYAML: ` +ruler_max_rules_per_rule_group: 100 +`, + expectedLimit: 100, + expectedNamespace: "mynamespace", + }, + "zero limit for the right namespace": { + inputYAML: ` +ruler_max_rules_per_rule_group: 100 + +ruler_max_rules_per_rule_group_by_namespace: + mynamespace: 0 +`, + expectedLimit: 0, + expectedNamespace: "mynamespace", + }, + "other namespaces are not affected": { + inputYAML: ` +ruler_max_rules_per_rule_group: 100 + +ruler_max_rules_per_rule_group_by_namespace: + mynamespace: 10 +`, + expectedLimit: 100, + expectedNamespace: "othernamespace", + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + limitsYAML := Limits{} + require.NoError(t, yaml.Unmarshal([]byte(tt.inputYAML), &limitsYAML)) + + ov, err := NewOverrides(limitsYAML, nil) + require.NoError(t, err) + + require.Equal(t, tt.expectedLimit, ov.RulerMaxRulesPerRuleGroup("user", tt.expectedNamespace)) + }) + } +} + +func TestRulerMaxRulesPerRuleGroupLimitsOverrides(t *testing.T) { + baseYaml := ` +ruler_max_rules_per_rule_group: 5 + +ruler_max_rules_per_rule_group_by_namespace: + mynamespace: 10 +` + + overrideGenericLimitsOnly := ` +testuser: + ruler_max_rules_per_rule_group: 333 +` + + overrideNamespaceLimits := ` +testuser: + ruler_max_rules_per_rule_group_by_namespace: + mynamespace: 7777 +` + + overrideGenericLimitsAndNamespaceLimits := ` +testuser: + ruler_max_rules_per_rule_group: 333 + + ruler_max_rules_per_rule_group_by_namespace: + mynamespace: 7777 +` + + differentUserOverride := ` +differentuser: + ruler_max_rules_per_rule_group_by_namespace: + mynamespace: 500 +` + + tc := map[string]struct { + overrides string + inputNamespace string + expectedLimit int + }{ + "no overrides, mynamespace": { + inputNamespace: "mynamespace", + expectedLimit: 10, + }, + "no overrides, othernamespace": { + inputNamespace: "othernamespace", + expectedLimit: 5, + }, + "generic override, mynamespace": { + inputNamespace: "mynamespace", + overrides: overrideGenericLimitsOnly, + expectedLimit: 10, + }, + "generic override, othernamespace": { + inputNamespace: "othernamespace", + overrides: overrideGenericLimitsOnly, + expectedLimit: 333, + }, + "namespace limit override, mynamespace": { + inputNamespace: "mynamespace", + overrides: overrideNamespaceLimits, + expectedLimit: 7777, + }, + "namespace limit override, othernamespace": { + inputNamespace: "othernamespace", + overrides: overrideNamespaceLimits, + expectedLimit: 5, + 
}, + "generic and namespace limit override, mynamespace": { + inputNamespace: "mynamespace", + overrides: overrideGenericLimitsAndNamespaceLimits, + expectedLimit: 7777, + }, + "generic and namespace limit override, othernamespace": { + inputNamespace: "othernamespace", + overrides: overrideGenericLimitsAndNamespaceLimits, + expectedLimit: 333, + }, + "different user override, mynamespace": { + inputNamespace: "mynamespace", + overrides: differentUserOverride, + expectedLimit: 10, + }, + "different user override, othernamespace": { + inputNamespace: "othernamespace", + overrides: differentUserOverride, + expectedLimit: 5, + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + + t.Cleanup(func() { + SetDefaultLimitsForYAMLUnmarshalling(getDefaultLimits()) + }) + + SetDefaultLimitsForYAMLUnmarshalling(getDefaultLimits()) + + var limitsYAML Limits + err := yaml.Unmarshal([]byte(baseYaml), &limitsYAML) + require.NoError(t, err) + + SetDefaultLimitsForYAMLUnmarshalling(limitsYAML) + + overrides := map[string]*Limits{} + err = yaml.Unmarshal([]byte(tt.overrides), &overrides) + require.NoError(t, err) + + tl := NewMockTenantLimits(overrides) + ov, err := NewOverrides(limitsYAML, tl) + require.NoError(t, err) + + require.Equal(t, tt.expectedLimit, ov.RulerMaxRulesPerRuleGroup("testuser", tt.inputNamespace)) + }) + } +} + +func TestRulerMaxRuleGroupsPerTenantLimits(t *testing.T) { + tc := map[string]struct { + inputYAML string + expectedLimit int + expectedNamespace string + }{ + "no namespace specific limit": { + inputYAML: ` +ruler_max_rule_groups_per_tenant: 200 +`, + expectedLimit: 200, + expectedNamespace: "mynamespace", + }, + "zero limit for the right namespace": { + inputYAML: ` +ruler_max_rule_groups_per_tenant: 200 + +ruler_max_rule_groups_per_tenant_by_namespace: + mynamespace: 1 +`, + expectedLimit: 1, + expectedNamespace: "mynamespace", + }, + "other namespaces are not affected": { + inputYAML: ` +ruler_max_rule_groups_per_tenant: 200 + +ruler_max_rule_groups_per_tenant_by_namespace: + mynamespace: 20 +`, + expectedLimit: 200, + expectedNamespace: "othernamespace", + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + limitsYAML := Limits{} + require.NoError(t, yaml.Unmarshal([]byte(tt.inputYAML), &limitsYAML)) + + ov, err := NewOverrides(limitsYAML, nil) + require.NoError(t, err) + + require.Equal(t, tt.expectedLimit, ov.RulerMaxRuleGroupsPerTenant("user", tt.expectedNamespace)) + }) + } +} + +func TestRulerMaxRuleGroupsPerTenantLimitsOverrides(t *testing.T) { + baseYaml := ` +ruler_max_rule_groups_per_tenant: 20 + +ruler_max_rule_groups_per_tenant_by_namespace: + mynamespace: 20 +` + + overrideGenericLimitsOnly := ` +testuser: + ruler_max_rule_groups_per_tenant: 444 +` + + overrideNamespaceLimits := ` +testuser: + ruler_max_rule_groups_per_tenant_by_namespace: + mynamespace: 8888 +` + + overrideGenericLimitsAndNamespaceLimits := ` +testuser: + ruler_max_rule_groups_per_tenant: 444 + + ruler_max_rule_groups_per_tenant_by_namespace: + mynamespace: 8888 +` + + differentUserOverride := ` +differentuser: + ruler_max_rule_groups_per_tenant_by_namespace: + mynamespace: 600 +` + + tc := map[string]struct { + overrides string + inputNamespace string + expectedLimit int + }{ + "no overrides, mynamespace": { + inputNamespace: "mynamespace", + expectedLimit: 20, + }, + "no overrides, othernamespace": { + inputNamespace: "othernamespace", + expectedLimit: 20, + }, + "generic override, mynamespace": { + inputNamespace: "mynamespace", + overrides: 
overrideGenericLimitsOnly, + expectedLimit: 20, + }, + "generic override, othernamespace": { + inputNamespace: "othernamespace", + overrides: overrideGenericLimitsOnly, + expectedLimit: 444, + }, + "namespace limit override, mynamespace": { + inputNamespace: "mynamespace", + overrides: overrideNamespaceLimits, + expectedLimit: 8888, + }, + "namespace limit override, othernamespace": { + inputNamespace: "othernamespace", + overrides: overrideNamespaceLimits, + expectedLimit: 20, + }, + "generic and namespace limit override, mynamespace": { + inputNamespace: "mynamespace", + overrides: overrideGenericLimitsAndNamespaceLimits, + expectedLimit: 8888, + }, + "generic and namespace limit override, othernamespace": { + inputNamespace: "othernamespace", + overrides: overrideGenericLimitsAndNamespaceLimits, + expectedLimit: 444, + }, + "different user override, mynamespace": { + inputNamespace: "mynamespace", + overrides: differentUserOverride, + expectedLimit: 20, + }, + "different user override, othernamespace": { + inputNamespace: "othernamespace", + overrides: differentUserOverride, + expectedLimit: 20, + }, + } + + for name, tt := range tc { + t.Run(name, func(t *testing.T) { + + t.Cleanup(func() { + SetDefaultLimitsForYAMLUnmarshalling(getDefaultLimits()) + }) + + SetDefaultLimitsForYAMLUnmarshalling(getDefaultLimits()) + + var limitsYAML Limits + err := yaml.Unmarshal([]byte(baseYaml), &limitsYAML) + require.NoError(t, err) + + SetDefaultLimitsForYAMLUnmarshalling(limitsYAML) + + overrides := map[string]*Limits{} + err = yaml.Unmarshal([]byte(tt.overrides), &overrides) + require.NoError(t, err) + + tl := NewMockTenantLimits(overrides) + ov, err := NewOverrides(limitsYAML, tl) + require.NoError(t, err) + + require.Equal(t, tt.expectedLimit, ov.RulerMaxRuleGroupsPerTenant("testuser", tt.inputNamespace)) + }) + } +} + func TestCustomTrackerConfigDeserialize(t *testing.T) { expectedConfig, err := activeseries.NewCustomTrackersConfig(map[string]string{"baz": `{foo="bar"}`}) require.NoError(t, err, "creating expected config") diff --git a/pkg/util/validation/notifications_limit_flag.go b/pkg/util/validation/notifications_limit_flag.go index b8ae22f1049..5b31ae1e829 100644 --- a/pkg/util/validation/notifications_limit_flag.go +++ b/pkg/util/validation/notifications_limit_flag.go @@ -6,57 +6,24 @@ package validation import ( - "encoding/json" - "fmt" - "github.com/pkg/errors" - "gopkg.in/yaml.v3" "github.com/grafana/mimir/pkg/util" ) -var allowedIntegrationNames = []string{ - "webhook", "email", "pagerduty", "opsgenie", "wechat", "slack", "victorops", "pushover", "sns", "webex", "telegram", "discord", "msteams", -} - -type NotificationRateLimitMap map[string]float64 - -// String implements flag.Value -func (m NotificationRateLimitMap) String() string { - out, err := json.Marshal(map[string]float64(m)) - if err != nil { - return fmt.Sprintf("failed to marshal: %v", err) +func validateIntegrationLimit(k string, _ float64) error { + if !util.StringsContain(allowedIntegrationNames, k) { + return errors.Errorf("unknown integration name: %s", k) } - return string(out) -} - -// Set implements flag.Value -func (m NotificationRateLimitMap) Set(s string) error { - newMap := map[string]float64{} - return m.updateMap(json.Unmarshal([]byte(s), &newMap), newMap) -} - -// UnmarshalYAML implements yaml.Unmarshaler. 
-func (m NotificationRateLimitMap) UnmarshalYAML(value *yaml.Node) error { - newMap := map[string]float64{} - return m.updateMap(value.DecodeWithOptions(newMap, yaml.DecodeOptions{KnownFields: true}), newMap) + return nil } -func (m NotificationRateLimitMap) updateMap(unmarshalErr error, newMap map[string]float64) error { - if unmarshalErr != nil { - return unmarshalErr - } - - for k, v := range newMap { - if !util.StringsContain(allowedIntegrationNames, k) { - return errors.Errorf("unknown integration name: %s", k) - } - m[k] = v - } - return nil +// allowedIntegrationNames is a list of all the integrations that can be rate limited. +var allowedIntegrationNames = []string{ + "webhook", "email", "pagerduty", "opsgenie", "wechat", "slack", "victorops", "pushover", "sns", "webex", "telegram", "discord", "msteams", } -// MarshalYAML implements yaml.Marshaler. -func (m NotificationRateLimitMap) MarshalYAML() (interface{}, error) { - return map[string]float64(m), nil +// NotificationRateLimitMap returns a map that can be used as a flag for setting notification rate limits. +func NotificationRateLimitMap() LimitsMap[float64] { + return NewLimitsMap[float64](validateIntegrationLimit) } diff --git a/pkg/util/validation/notifications_limit_flag_test.go b/pkg/util/validation/notifications_limit_flag_test.go index a0b53e24b47..689a3d52067 100644 --- a/pkg/util/validation/notifications_limit_flag_test.go +++ b/pkg/util/validation/notifications_limit_flag_test.go @@ -8,6 +8,7 @@ package validation import ( "bytes" "flag" + "maps" "testing" "github.com/stretchr/testify/assert" @@ -18,13 +19,16 @@ import ( func TestNotificationLimitsMap(t *testing.T) { for name, tc := range map[string]struct { args []string - expected NotificationRateLimitMap + expected LimitsMap[float64] error string }{ "basic test": { args: []string{"-map-flag", "{\"email\": 100 }"}, - expected: NotificationRateLimitMap{ - "email": 100, + expected: LimitsMap[float64]{ + validator: validateIntegrationLimit, + data: map[string]float64{ + "email": 100, + }, }, }, @@ -39,7 +43,7 @@ func TestNotificationLimitsMap(t *testing.T) { }, } { t.Run(name, func(t *testing.T) { - v := NotificationRateLimitMap{} + v := NotificationRateLimitMap() fs := flag.NewFlagSet("test", flag.ContinueOnError) fs.SetOutput(&bytes.Buffer{}) // otherwise errors would go to stderr. @@ -51,20 +55,20 @@ func TestNotificationLimitsMap(t *testing.T) { assert.Equal(t, tc.error, err.Error()) } else { assert.NoError(t, err) - assert.Equal(t, tc.expected, v) + assert.True(t, maps.Equal(tc.expected.data, v.data)) } }) } } type TestStruct struct { - Flag NotificationRateLimitMap `yaml:"flag"` + Flag LimitsMap[float64] `yaml:"flag"` } func TestNotificationsLimitMapYaml(t *testing.T) { var testStruct TestStruct - testStruct.Flag = map[string]float64{} + testStruct.Flag = NotificationRateLimitMap() require.NoError(t, testStruct.Flag.Set("{\"email\": 500 }")) expected := []byte(`flag: @@ -76,16 +80,16 @@ func TestNotificationsLimitMapYaml(t *testing.T) { assert.Equal(t, expected, actual) var actualStruct TestStruct - actualStruct.Flag = NotificationRateLimitMap{} // must be set, otherwise unmarshalling panics. 
+ actualStruct.Flag = NotificationRateLimitMap() err = yaml.Unmarshal(expected, &actualStruct) require.NoError(t, err) - assert.Equal(t, testStruct, actualStruct) + assert.Equal(t, testStruct.Flag.data, actualStruct.Flag.data) } func TestUnknownIntegrationWhenLoadingYaml(t *testing.T) { var s TestStruct - s.Flag = NotificationRateLimitMap{} // must be set, otherwise unmarshalling panics. + s.Flag = NotificationRateLimitMap() yamlInput := `flag: unknown_integration: 500 @@ -98,7 +102,7 @@ func TestUnknownIntegrationWhenLoadingYaml(t *testing.T) { func TestWrongYamlStructureWhenLoadingYaml(t *testing.T) { var s TestStruct - s.Flag = NotificationRateLimitMap{} // must be set, otherwise unmarshalling panics. + s.Flag = NotificationRateLimitMap() yamlInput := `flag: email: diff --git a/tools/doc-generator/parse/parser.go b/tools/doc-generator/parse/parser.go index c1c78cb5fe1..364907041a5 100644 --- a/tools/doc-generator/parse/parser.go +++ b/tools/doc-generator/parse/parser.go @@ -343,6 +343,10 @@ func getFieldName(field reflect.StructField) string { func getFieldCustomType(t reflect.Type) (string, bool) { // Handle custom data types used in the config switch t.String() { + case reflect.TypeOf(validation.LimitsMap[float64]{}).String(): + return "map of string to float64", true + case reflect.TypeOf(validation.LimitsMap[int]{}).String(): + return "map of string to int", true case reflect.TypeOf(&url.URL{}).String(): return "url", true case reflect.TypeOf(time.Duration(0)).String(): @@ -425,6 +429,10 @@ func getFieldType(t reflect.Type) (string, error) { func getCustomFieldType(t reflect.Type) (string, bool) { // Handle custom data types used in the config switch t.String() { + case reflect.TypeOf(validation.LimitsMap[float64]{}).String(): + return "map of string to float64", true + case reflect.TypeOf(validation.LimitsMap[int]{}).String(): + return "map of string to int", true case reflect.TypeOf(&url.URL{}).String(): return "url", true case reflect.TypeOf(time.Duration(0)).String(): @@ -471,7 +479,9 @@ func ReflectType(typ string) reflect.Type { case "blocked_queries_config...": return reflect.TypeOf([]*validation.BlockedQuery{}) case "map of string to float64": - return reflect.TypeOf(map[string]float64{}) + return reflect.TypeOf(validation.LimitsMap[float64]{}) + case "map of string to int": + return reflect.TypeOf(validation.LimitsMap[int]{}) case "list of durations": return reflect.TypeOf(tsdb.DurationList{}) default: diff --git a/vendor/github.com/prometheus/prometheus/discovery/manager.go b/vendor/github.com/prometheus/prometheus/discovery/manager.go index f14071af309..897d7d151cf 100644 --- a/vendor/github.com/prometheus/prometheus/discovery/manager.go +++ b/vendor/github.com/prometheus/prometheus/discovery/manager.go @@ -120,6 +120,16 @@ func Name(n string) func(*Manager) { } } +// Updatert sets the updatert of the manager. +// Used to speed up tests. +func Updatert(u time.Duration) func(*Manager) { + return func(m *Manager) { + m.mtx.Lock() + defer m.mtx.Unlock() + m.updatert = u + } +} + // HTTPClientOptions sets the list of HTTP client options to expose to // Discoverers. It is up to Discoverers to choose to use the options provided. 
func HTTPClientOptions(opts ...config.HTTPClientOption) func(*Manager) { diff --git a/vendor/github.com/prometheus/prometheus/model/labels/labels_common.go b/vendor/github.com/prometheus/prometheus/model/labels/labels_common.go index f46321c97e7..4bc94f84fe5 100644 --- a/vendor/github.com/prometheus/prometheus/model/labels/labels_common.go +++ b/vendor/github.com/prometheus/prometheus/model/labels/labels_common.go @@ -18,6 +18,7 @@ import ( "encoding/json" "slices" "strconv" + "unsafe" "github.com/prometheus/common/model" ) @@ -215,3 +216,7 @@ func contains(s []Label, n string) bool { } return false } + +func yoloString(b []byte) string { + return *((*string)(unsafe.Pointer(&b))) +} diff --git a/vendor/github.com/prometheus/prometheus/model/labels/labels_dedupelabels.go b/vendor/github.com/prometheus/prometheus/model/labels/labels_dedupelabels.go index dfc74aa3a3d..972f5dc164e 100644 --- a/vendor/github.com/prometheus/prometheus/model/labels/labels_dedupelabels.go +++ b/vendor/github.com/prometheus/prometheus/model/labels/labels_dedupelabels.go @@ -20,7 +20,6 @@ import ( "slices" "strings" "sync" - "unsafe" "github.com/cespare/xxhash/v2" ) @@ -426,10 +425,6 @@ func EmptyLabels() Labels { return Labels{} } -func yoloString(b []byte) string { - return *((*string)(unsafe.Pointer(&b))) -} - // New returns a sorted Labels from the given labels. // The caller has to guarantee that all label names are unique. // Note this function is not efficient; should not be used in performance-critical places. diff --git a/vendor/github.com/prometheus/prometheus/model/labels/labels_stringlabels.go b/vendor/github.com/prometheus/prometheus/model/labels/labels_stringlabels.go index 9ef764daecb..bccceb61fe1 100644 --- a/vendor/github.com/prometheus/prometheus/model/labels/labels_stringlabels.go +++ b/vendor/github.com/prometheus/prometheus/model/labels/labels_stringlabels.go @@ -299,11 +299,6 @@ func Equal(ls, o Labels) bool { func EmptyLabels() Labels { return Labels{} } - -func yoloString(b []byte) string { - return *((*string)(unsafe.Pointer(&b))) -} - func yoloBytes(s string) (b []byte) { *(*string)(unsafe.Pointer(&b)) = s (*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s) diff --git a/vendor/github.com/prometheus/prometheus/model/labels/matcher.go b/vendor/github.com/prometheus/prometheus/model/labels/matcher.go index 8e220e392d8..a09c838e3f8 100644 --- a/vendor/github.com/prometheus/prometheus/model/labels/matcher.go +++ b/vendor/github.com/prometheus/prometheus/model/labels/matcher.go @@ -101,7 +101,7 @@ func (m *Matcher) shouldQuoteName() bool { } return true } - return false + return len(m.Name) == 0 } // Matches returns whether the matcher matches the given string value. diff --git a/vendor/github.com/prometheus/prometheus/model/labels/regexp.go b/vendor/github.com/prometheus/prometheus/model/labels/regexp.go index f8323deb85c..562cac2e1ce 100644 --- a/vendor/github.com/prometheus/prometheus/model/labels/regexp.go +++ b/vendor/github.com/prometheus/prometheus/model/labels/regexp.go @@ -844,39 +844,23 @@ func (m *equalMultiStringMapMatcher) Matches(s string) bool { // toNormalisedLower normalise the input string using "Unicode Normalization Form D" and then convert // it to lower case. func toNormalisedLower(s string) string { - // Check if the string is all ASCII chars and convert any upper case character to lower case character. 
- isASCII := true - var ( - b strings.Builder - pos int - ) - b.Grow(len(s)) + var buf []byte for i := 0; i < len(s); i++ { c := s[i] - if isASCII && c >= utf8.RuneSelf { - isASCII = false - break + if c >= utf8.RuneSelf { + return strings.Map(unicode.ToLower, norm.NFKD.String(s)) } if 'A' <= c && c <= 'Z' { - c += 'a' - 'A' - if pos < i { - b.WriteString(s[pos:i]) + if buf == nil { + buf = []byte(s) } - b.WriteByte(c) - pos = i + 1 + buf[i] = c + 'a' - 'A' } } - if pos < len(s) { - b.WriteString(s[pos:]) + if buf == nil { + return s } - - // Optimize for ASCII-only strings. In this case we don't have to do any normalization. - if isASCII { - return b.String() - } - - // Normalise and convert to lower. - return strings.Map(unicode.ToLower, norm.NFKD.String(b.String())) + return yoloString(buf) } // anyStringWithoutNewlineMatcher is a stringMatcher which matches any string diff --git a/vendor/github.com/prometheus/prometheus/notifier/notifier.go b/vendor/github.com/prometheus/prometheus/notifier/notifier.go index 4cf376aa05f..eb83c45b075 100644 --- a/vendor/github.com/prometheus/prometheus/notifier/notifier.go +++ b/vendor/github.com/prometheus/prometheus/notifier/notifier.go @@ -298,25 +298,14 @@ func (n *Manager) nextBatch() []*Alert { return alerts } -// Run dispatches notifications continuously. -func (n *Manager) Run(tsets <-chan map[string][]*targetgroup.Group) { +// sendLoop continuously consumes the notifications queue and sends alerts to +// the configured Alertmanagers. +func (n *Manager) sendLoop() { for { - // The select is split in two parts, such as we will first try to read - // new alertmanager targets if they are available, before sending new - // alerts. select { case <-n.ctx.Done(): return - case ts := <-tsets: - n.reload(ts) - default: - select { - case <-n.ctx.Done(): - return - case ts := <-tsets: - n.reload(ts) - case <-n.more: - } + case <-n.more: } alerts := n.nextBatch() @@ -330,6 +319,21 @@ func (n *Manager) Run(tsets <-chan map[string][]*targetgroup.Group) { } } +// Run receives updates of target groups and triggers a reload. +// The dispatching of notifications occurs in the background to prevent blocking the receipt of target updates. +// Refer to https://github.com/prometheus/prometheus/issues/13676 for more details. 
+func (n *Manager) Run(tsets <-chan map[string][]*targetgroup.Group) { + go n.sendLoop() + for { + select { + case <-n.ctx.Done(): + return + case ts := <-tsets: + n.reload(ts) + } + } +} + func (n *Manager) reload(tgs map[string][]*targetgroup.Group) { n.mtx.Lock() defer n.mtx.Unlock() @@ -471,10 +475,6 @@ func (n *Manager) sendAll(alerts ...*Alert) bool { numSuccess atomic.Uint64 ) for _, ams := range amSets { - if len(ams.ams) == 0 { - continue - } - var ( payload []byte err error @@ -483,6 +483,11 @@ func (n *Manager) sendAll(alerts ...*Alert) bool { ams.mtx.RLock() + if len(ams.ams) == 0 { + ams.mtx.RUnlock() + continue + } + if len(ams.cfg.AlertRelabelConfigs) > 0 { amAlerts = relabelAlerts(ams.cfg.AlertRelabelConfigs, labels.Labels{}, alerts) if len(amAlerts) == 0 { diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test index 2c198374acb..7e741e9956f 100644 --- a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/functions.test @@ -1213,3 +1213,11 @@ eval instant at 5m log10(exp_root_log - 20) {l="y"} -Inf clear + +# Test that timestamp() handles the scenario where there are more steps than samples. +load 1m + metric 0+1x1000 + +# We expect the value to be 0 for t=0s to t=59s (inclusive), then 60 for t=60s and t=61s. +eval range from 0 to 61s step 1s timestamp(metric) + {} 0x59 60 60 diff --git a/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/range_queries.test b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/range_queries.test new file mode 100644 index 00000000000..e6951096026 --- /dev/null +++ b/vendor/github.com/prometheus/prometheus/promql/promqltest/testdata/range_queries.test @@ -0,0 +1,73 @@ +# sum_over_time with all values +load 30s + bar 0 1 10 100 1000 + +eval range from 0 to 2m step 1m sum_over_time(bar[30s]) + {} 0 11 1100 + +clear + +# sum_over_time with trailing values +load 30s + bar 0 1 10 100 1000 0 0 0 0 + +eval range from 0 to 2m step 1m sum_over_time(bar[30s]) + {} 0 11 1100 + +clear + +# sum_over_time with all values long +load 30s + bar 0 1 10 100 1000 10000 100000 1000000 10000000 + +eval range from 0 to 4m step 1m sum_over_time(bar[30s]) + {} 0 11 1100 110000 11000000 + +clear + +# sum_over_time with all values random +load 30s + bar 5 17 42 2 7 905 51 + +eval range from 0 to 3m step 1m sum_over_time(bar[30s]) + {} 5 59 9 956 + +clear + +# metric query +load 30s + metric 1+1x4 + +eval range from 0 to 2m step 1m metric + metric 1 3 5 + +clear + +# metric query with trailing values +load 30s + metric 1+1x8 + +eval range from 0 to 2m step 1m metric + metric 1 3 5 + +clear + +# short-circuit +load 30s + foo{job="1"} 1+1x4 + bar{job="2"} 1+1x4 + +eval range from 0 to 2m step 1m foo > 2 or bar + foo{job="1"} _ 3 5 + bar{job="2"} 1 3 5 + +clear + +# Drop metric name +load 30s + requests{job="1", __address__="bar"} 100 + +eval range from 0 to 2m step 1m requests * 2 + {job="1", __address__="bar"} 200 200 200 + +clear diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/codec.go b/vendor/github.com/prometheus/prometheus/storage/remote/codec.go index 1228b23f5c5..8c569ff0388 100644 --- a/vendor/github.com/prometheus/prometheus/storage/remote/codec.go +++ b/vendor/github.com/prometheus/prometheus/storage/remote/codec.go @@ -95,7 +95,7 @@ func EncodeReadResponse(resp *prompb.ReadResponse, w 
http.ResponseWriter) error // ToQuery builds a Query proto. func ToQuery(from, to int64, matchers []*labels.Matcher, hints *storage.SelectHints) (*prompb.Query, error) { - ms, err := toLabelMatchers(matchers) + ms, err := ToLabelMatchers(matchers) if err != nil { return nil, err } @@ -166,7 +166,7 @@ func ToQueryResult(ss storage.SeriesSet, sampleLimit int) (*prompb.QueryResult, } resp.Timeseries = append(resp.Timeseries, &prompb.TimeSeries{ - Labels: labelsToLabelsProto(series.Labels(), nil), + Labels: LabelsToLabelsProto(series.Labels(), nil), Samples: samples, Histograms: histograms, }) @@ -182,7 +182,7 @@ func FromQueryResult(sortSeries bool, res *prompb.QueryResult) storage.SeriesSet if err := validateLabelsAndMetricName(ts.Labels); err != nil { return errSeriesSet{err: err} } - lbls := labelProtosToLabels(&b, ts.Labels) + lbls := LabelProtosToLabels(&b, ts.Labels) series = append(series, &concreteSeries{labels: lbls, floats: ts.Samples, histograms: ts.Histograms}) } @@ -235,7 +235,7 @@ func StreamChunkedReadResponses( for ss.Next() { series := ss.At() iter = series.Iterator(iter) - lbls = MergeLabels(labelsToLabelsProto(series.Labels(), lbls), sortedExternalLabels) + lbls = MergeLabels(LabelsToLabelsProto(series.Labels(), lbls), sortedExternalLabels) maxDataLength := maxBytesInFrame for _, lbl := range lbls { @@ -566,7 +566,8 @@ func validateLabelsAndMetricName(ls []prompb.Label) error { return nil } -func toLabelMatchers(matchers []*labels.Matcher) ([]*prompb.LabelMatcher, error) { +// ToLabelMatchers converts Prometheus label matchers to protobuf label matchers. +func ToLabelMatchers(matchers []*labels.Matcher) ([]*prompb.LabelMatcher, error) { pbMatchers := make([]*prompb.LabelMatcher, 0, len(matchers)) for _, m := range matchers { var mType prompb.LabelMatcher_Type @@ -591,7 +592,7 @@ func toLabelMatchers(matchers []*labels.Matcher) ([]*prompb.LabelMatcher, error) return pbMatchers, nil } -// FromLabelMatchers parses protobuf label matchers to Prometheus label matchers. +// FromLabelMatchers converts protobuf label matchers to Prometheus label matchers. func FromLabelMatchers(matchers []*prompb.LabelMatcher) ([]*labels.Matcher, error) { result := make([]*labels.Matcher, 0, len(matchers)) for _, matcher := range matchers { @@ -621,7 +622,7 @@ func exemplarProtoToExemplar(b *labels.ScratchBuilder, ep prompb.Exemplar) exemp timestamp := ep.Timestamp return exemplar.Exemplar{ - Labels: labelProtosToLabels(b, ep.Labels), + Labels: LabelProtosToLabels(b, ep.Labels), Value: ep.Value, Ts: timestamp, HasTs: timestamp != 0, @@ -761,7 +762,9 @@ func LabelProtosToMetric(labelPairs []*prompb.Label) model.Metric { return metric } -func labelProtosToLabels(b *labels.ScratchBuilder, labelPairs []prompb.Label) labels.Labels { +// LabelProtosToLabels transforms prompb labels into labels. The labels builder +// will be used to build the returned labels. +func LabelProtosToLabels(b *labels.ScratchBuilder, labelPairs []prompb.Label) labels.Labels { b.Reset() for _, l := range labelPairs { b.Add(l.Name, l.Value) @@ -770,9 +773,9 @@ func labelProtosToLabels(b *labels.ScratchBuilder, labelPairs []prompb.Label) la return b.Labels() } -// labelsToLabelsProto transforms labels into prompb labels. The buffer slice +// LabelsToLabelsProto transforms labels into prompb labels. The buffer slice // will be used to avoid allocations if it is big enough to store the labels. 
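With the conversion helpers now exported, code outside the `remote` package can translate between `labels.Labels` and `prompb.Label` slices directly. A minimal round-trip sketch, assuming the vendored Prometheus version in this diff where these helpers are exported:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage/remote"
)

func main() {
	lbls := labels.FromStrings("__name__", "up", "job", "mimir")

	// Labels -> []prompb.Label; passing nil lets the helper allocate a buffer.
	protoLabels := remote.LabelsToLabelsProto(lbls, nil)

	// []prompb.Label -> Labels, reusing a ScratchBuilder between calls.
	b := labels.NewScratchBuilder(len(protoLabels))
	roundTripped := remote.LabelProtosToLabels(&b, protoLabels)

	fmt.Println(labels.Equal(lbls, roundTripped)) // true
}
```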
diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go b/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go
index 01d2db06a5c..b244b331b0c 100644
--- a/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go
+++ b/vendor/github.com/prometheus/prometheus/storage/remote/queue_manager.go
@@ -1507,7 +1507,7 @@ func (s *shards) populateTimeSeries(batch []timeSeries, pendingData []prompb.Tim
         // Number of pending samples is limited by the fact that sendSamples (via sendSamplesWithBackoff)
         // retries endlessly, so once we reach max samples, if we can never send to the endpoint we'll
         // stop reading from the queue. This makes it safe to reference pendingSamples by index.
-        pendingData[nPending].Labels = labelsToLabelsProto(d.seriesLabels, pendingData[nPending].Labels)
+        pendingData[nPending].Labels = LabelsToLabelsProto(d.seriesLabels, pendingData[nPending].Labels)
         switch d.sType {
         case tSample:
             pendingData[nPending].Samples = append(pendingData[nPending].Samples, prompb.Sample{
@@ -1517,7 +1517,7 @@ func (s *shards) populateTimeSeries(batch []timeSeries, pendingData []prompb.Tim
             nPendingSamples++
         case tExemplar:
             pendingData[nPending].Exemplars = append(pendingData[nPending].Exemplars, prompb.Exemplar{
-                Labels:    labelsToLabelsProto(d.exemplarLabels, nil),
+                Labels:    LabelsToLabelsProto(d.exemplarLabels, nil),
                 Value:     d.value,
                 Timestamp: d.timestamp,
             })
diff --git a/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go b/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go
index ff227292b8a..e7515a42b88 100644
--- a/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go
+++ b/vendor/github.com/prometheus/prometheus/storage/remote/write_handler.go
@@ -116,7 +116,7 @@ func (h *writeHandler) write(ctx context.Context, req *prompb.WriteRequest) (err
     b := labels.NewScratchBuilder(0)
     var exemplarErr error
     for _, ts := range req.Timeseries {
-        labels := labelProtosToLabels(&b, ts.Labels)
+        labels := LabelProtosToLabels(&b, ts.Labels)
         if !labels.IsValid() {
             level.Warn(h.logger).Log("msg", "Invalid metric names or labels", "got", labels.String())
             samplesWithInvalidLabels++
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/head.go b/vendor/github.com/prometheus/prometheus/tsdb/head.go
index f84a0c29cef..62bb3ce92fd 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/head.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/head.go
@@ -1597,7 +1597,7 @@ func (h *Head) gc() (actualInOrderMint, minOOOTime int64, minMmapFile int) {
 
     // Drop old chunks and remember series IDs and hashes if they can be
     // deleted entirely.
-    deleted, chunksRemoved, actualInOrderMint, minOOOTime, minMmapFile := h.series.gc(mint, minOOOMmapRef)
+    deleted, affected, chunksRemoved, actualInOrderMint, minOOOTime, minMmapFile := h.series.gc(mint, minOOOMmapRef)
     seriesRemoved := len(deleted)
 
     h.metrics.seriesRemoved.Add(float64(seriesRemoved))
@@ -1606,7 +1606,7 @@ func (h *Head) gc() (actualInOrderMint, minOOOTime int64, minMmapFile int) {
     h.numSeries.Sub(uint64(seriesRemoved))
 
     // Remove deleted series IDs from the postings lists.
-    h.postings.Delete(deleted)
+    h.postings.Delete(deleted, affected)
 
     // Remove tombstones referring to the deleted series.
     h.tombstones.DeleteTombstones(deleted)
@@ -1920,9 +1920,10 @@ func newStripeSeries(stripeSize int, seriesCallback SeriesLifecycleCallback) *st
 // but the returned map goes into postings.Delete() which expects a map[storage.SeriesRef]struct
 // and there's no easy way to cast maps.
 // minMmapFile is the min mmap file number seen in the series (in-order and out-of-order) after gc'ing the series.
-func (s *stripeSeries) gc(mint int64, minOOOMmapRef chunks.ChunkDiskMapperRef) (_ map[storage.SeriesRef]struct{}, _ int, _, _ int64, minMmapFile int) {
+func (s *stripeSeries) gc(mint int64, minOOOMmapRef chunks.ChunkDiskMapperRef) (_ map[storage.SeriesRef]struct{}, _ map[labels.Label]struct{}, _ int, _, _ int64, minMmapFile int) {
     var (
         deleted  = map[storage.SeriesRef]struct{}{}
+        affected = map[labels.Label]struct{}{}
         rmChunks = 0
         actualMint int64 = math.MaxInt64
         minOOOTime int64 = math.MaxInt64
@@ -1978,6 +1979,7 @@ func (s *stripeSeries) gc(mint int64, minOOOMmapRef chunks.ChunkDiskMapperRef) (
             }
 
             deleted[storage.SeriesRef(series.ref)] = struct{}{}
+            series.lset.Range(func(l labels.Label) { affected[l] = struct{}{} })
             s.hashes[hashShard].del(hash, series.ref)
             delete(s.series[refShard], series.ref)
             deletedForCallback[series.ref] = series.lset
@@ -1989,7 +1991,7 @@ func (s *stripeSeries) gc(mint int64, minOOOMmapRef chunks.ChunkDiskMapperRef) (
         actualMint = mint
     }
 
-    return deleted, rmChunks, actualMint, minOOOTime, minMmapFile
+    return deleted, affected, rmChunks, actualMint, minOOOTime, minMmapFile
 }
 
 // The iterForDeletion function iterates through all series, invoking the checkDeletedFunc for each.
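The head.go hunks above make the garbage-collection pass collect the label pairs carried by every deleted series, and the postings.go hunk below passes that set to `MemPostings.Delete` so only the postings lists that can actually contain deleted series (plus the all-postings list) are rewritten, rather than scanning every list in the index. A simplified, standalone sketch of the idea; the types here are local stand-ins, not the Prometheus ones:

```go
package main

import "fmt"

type seriesRef uint64
type label struct{ name, value string }

func main() {
	// Postings index: label -> series refs, plus a synthetic key that indexes
	// every series (the "all postings" list).
	all := label{}
	postings := map[label][]seriesRef{
		all:                          {1, 2, 3},
		{name: "job", value: "api"}:  {1, 2},
		{name: "job", value: "db"}:   {3},
		{name: "env", value: "prod"}: {1, 2, 3},
	}

	// Garbage collection drops series 2 and records the labels it carried.
	deleted := map[seriesRef]struct{}{2: {}}
	affected := map[label]struct{}{
		{name: "job", value: "api"}:  {},
		{name: "env", value: "prod"}: {},
	}

	// Only the affected lists (and the all-postings list) are rewritten;
	// {job="db"} is never touched.
	prune := func(l label) {
		repl := make([]seriesRef, 0, len(postings[l]))
		for _, id := range postings[l] {
			if _, ok := deleted[id]; !ok {
				repl = append(repl, id)
			}
		}
		if len(repl) > 0 {
			postings[l] = repl
		} else {
			delete(postings, l)
		}
	}
	for l := range affected {
		prune(l)
	}
	prune(all)

	fmt.Println(postings)
}
```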
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go b/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go
index e1032ff12c2..a5f222f123c 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go
@@ -288,89 +288,34 @@ func (p *MemPostings) EnsureOrder(numberOfConcurrentProcesses int) {
 }
 
 // Delete removes all ids in the given map from the postings lists.
-func (p *MemPostings) Delete(deleted map[storage.SeriesRef]struct{}) {
-    // We will take an optimistic read lock for the entire method,
-    // and only lock for writing when we actually find something to delete.
-    //
-    // Each SeriesRef can appear in several Postings.
-    // To change each one, we need to know the label name and value that it is indexed under.
-    // We iterate over all label names, then for each name all values,
-    // and look for individual series to be deleted.
-    p.mtx.RLock()
-    defer p.mtx.RUnlock()
-
-    // Collect all keys relevant for deletion once. New keys added afterwards
-    // can by definition not be affected by any of the given deletes.
-    keys := make([]string, 0, len(p.m))
-    maxVals := 0
-    for n := range p.m {
-        keys = append(keys, n)
-        if len(p.m[n]) > maxVals {
-            maxVals = len(p.m[n])
-        }
-    }
-
-    vals := make([]string, 0, maxVals)
-    for _, n := range keys {
-        // Copy the values and iterate the copy: if we unlock in the loop below,
-        // another goroutine might modify the map while we are part-way through it.
-        vals = vals[:0]
-        for v := range p.m[n] {
-            vals = append(vals, v)
-        }
-
-        // For each posting we first analyse whether the postings list is affected by the deletes.
-        // If no, we remove the label value from the vals list.
-        // This way we only need to Lock once later.
-        for i := 0; i < len(vals); {
-            found := false
-            refs := p.m[n][vals[i]]
-            for _, id := range refs {
-                if _, ok := deleted[id]; ok {
-                    i++
-                    found = true
-                    break
-                }
-            }
+// affectedLabels contains all the labels that are affected by the deletion, there's no need to check other labels.
+func (p *MemPostings) Delete(deleted map[storage.SeriesRef]struct{}, affected map[labels.Label]struct{}) {
+    p.mtx.Lock()
+    defer p.mtx.Unlock()
 
-            if !found {
-                // Didn't match, bring the last value to this position, make the slice shorter and check again.
-                // The order of the slice doesn't matter as it comes from a map iteration.
-                vals[i], vals = vals[len(vals)-1], vals[:len(vals)-1]
+    process := func(l labels.Label) {
+        orig := p.m[l.Name][l.Value]
+        repl := make([]storage.SeriesRef, 0, len(orig))
+        for _, id := range orig {
+            if _, ok := deleted[id]; !ok {
+                repl = append(repl, id)
             }
         }
-
-        // If no label values have deleted ids, just continue.
-        if len(vals) == 0 {
-            continue
-        }
-
-        // The only vals left here are the ones that contain deleted ids.
-        // Now we take the write lock and remove the ids.
-        p.mtx.RUnlock()
-        p.mtx.Lock()
-        for _, l := range vals {
-            repl := make([]storage.SeriesRef, 0, len(p.m[n][l]))
-
-            for _, id := range p.m[n][l] {
-                if _, ok := deleted[id]; !ok {
-                    repl = append(repl, id)
-                }
-            }
-            if len(repl) > 0 {
-                p.m[n][l] = repl
-            } else {
-                delete(p.m[n], l)
+        if len(repl) > 0 {
+            p.m[l.Name][l.Value] = repl
+        } else {
+            delete(p.m[l.Name], l.Value)
+            // Delete the key if we removed all values.
+            if len(p.m[l.Name]) == 0 {
+                delete(p.m, l.Name)
             }
         }
+    }
 
-        // Delete the key if we removed all values.
-        if len(p.m[n]) == 0 {
-            delete(p.m, n)
-        }
-        p.mtx.Unlock()
-        p.mtx.RLock()
+    for l := range affected {
+        process(l)
     }
+    process(allPostingsKey)
 }
 
 // Iter calls f for each postings list. It aborts if f returns an error and returns it.
diff --git a/vendor/modules.txt b/vendor/modules.txt
index d860a4915cc..c86bf59735b 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -973,7 +973,7 @@ github.com/prometheus/exporter-toolkit/web
 github.com/prometheus/procfs
 github.com/prometheus/procfs/internal/fs
 github.com/prometheus/procfs/internal/util
-# github.com/prometheus/prometheus v1.99.0 => github.com/grafana/mimir-prometheus v0.0.0-20240618115521-86ae072cdc80
+# github.com/prometheus/prometheus v1.99.0 => github.com/grafana/mimir-prometheus v0.0.0-20240620082736-3d8577bc0dfb
 ## explicit; go 1.21
 github.com/prometheus/prometheus/config
 github.com/prometheus/prometheus/discovery
@@ -1609,7 +1609,7 @@ sigs.k8s.io/kustomize/kyaml/yaml/walk
 sigs.k8s.io/yaml
 sigs.k8s.io/yaml/goyaml.v2
 sigs.k8s.io/yaml/goyaml.v3
-# github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20240618115521-86ae072cdc80
+# github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20240620082736-3d8577bc0dfb
 # github.com/hashicorp/memberlist => github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe
 # gopkg.in/yaml.v3 => github.com/colega/go-yaml-yaml v0.0.0-20220720105220-255a8d16d094
 # github.com/grafana/regexp => github.com/grafana/regexp v0.0.0-20240531075221-3685f1377d7b
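The vendor/modules.txt hunks above record the bumped mimir-prometheus pseudo-version. Assuming the repository uses the conventional Go module layout (its go.mod is not part of this excerpt), the matching replace directive would read:

```
replace github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-20240620082736-3d8577bc0dfb
```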