Update module github.com/twmb/franz-go/plugin/kotel to v1.5.0 (main) #9379
Merged: aknuds1 merged 1 commit into main from deps-update/main-github.com-twmb-franz-go-plugin-kotel-1.x on Sep 23, 2024
Conversation
ℹ Artifact update notice
File name: go.mod
In order to perform the update(s) described in the table above, Renovate ran the
aknuds1 approved these changes on Sep 23, 2024
renovate bot force-pushed the deps-update/main-github.com-twmb-franz-go-plugin-kotel-1.x branch from e0a5e79 to 2e73021 on September 23, 2024 08:46
aknuds1 deleted the deps-update/main-github.com-twmb-franz-go-plugin-kotel-1.x branch on September 23, 2024 09:18
dimitarvdimitrov added a commit that referenced this pull request on Sep 27, 2024. The squashed commit message rolls up the following changes:

Use labels hasher
Use consistent title name
kafka replay speed: adjust batchingQueueCapacity (#9344)
kafka replay speed: rename CLI flags (#9345)
kafka replay speed: add support for metadata & source (#9287)
kafka replay speed: improve fetching tracing (#9361)
continuous-test: Make the User-Agent header for the Mimir client configurable (#9338)
TestIngester_PushToStorage_CircuitBreaker: increase initial delay (#9351)
Update to latest commit of dskit main (#9356)
Update mimir-prometheus (#9358)
query-tee: add equivalent errors for string expression for range queries (#9366)
MQE: fix `rate()` over native histograms where first point in range is a counter reset (#9371)
Update module github.com/Azure/azure-sdk-for-go/sdk/storage/azblob to v1.4.1 (#9369)
Use centralized 'Add to docs project' workflow with GitHub App auth (#9330)
Update grafana/agent Docker tag to v0.43.1 (#9365)
Update module github.com/hashicorp/vault/api/auth/userpass to v0.8.0 (#9375)
Update module github.com/hashicorp/vault/api/auth/approle to v0.8.0 (#9374)
Update module go.opentelemetry.io/collector/pdata to v1.15.0 (#9380)
Update module github.com/hashicorp/vault/api/auth/kubernetes to v0.8.0 (#9377)
Update module github.com/twmb/franz-go/plugin/kotel to v1.5.0 (#9379)
kafka replay speed: ingestion metrics (#9346)
kafka replay speed: move error handling closer to actual ingestion (#9349)
kafka replay speed: concurrency fetching improvements (#9389)
Make concurrentFetchers change its concurrency dynamically (#9437)
kafka replay speed: fix concurrent fetching concurrency transition (#9447)
This PR contains the following updates:
v1.4.1 -> v1.5.0
Release Notes
twmb/franz-go (github.com/twmb/franz-go/plugin/kotel)
v1.5.0
This release adds a few new APIs, has a few small behavior changes, and has one
"breaking" change.
Breaking changes
The kerberos package is now a dedicated separate module. Rather than requiring a major version bump, since this fix is entirely at the module level for an almost entirely unused package, I figured it is okayish to technically break compatibility for the few usages of this package, when the fix can be done entirely when `go get`ing. The gokrb5 library, basically the only library in the Go ecosystem that implements Kerberos, has a slightly broken license. Organizations that are sensitive to this were required to not use franz-go even if they did not use Kerberos, because franz-go pulls in a dependency on gokrb5. Now, with kerberos being a distinct and separate module, depending on franz-go only will not cause an indirect dependency on gokrb5.
If your upgrade is broken by this change, run:
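The command itself is elided in this rendering of the release notes. As a hedged sketch, the fix amounts to fetching the now-separate kerberos module explicitly; the module path and version below are assumptions inferred from the package name above, not taken from the notes:

```shell
# Sketch only: the module path and version are assumed; check the
# franz-go repository for the exact module path and tagged version.
go get github.com/twmb/franz-go/pkg/kerberos@latest
go mod tidy
```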
Behavior changes
`UnknownTopicRetries` now allows -1 to signal disabling the option (meaning unlimited retries, rather than no retries). This follows the convention of other options where -1 disables limits.
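The convention reads as: a negative value disables the limit rather than setting it to zero. A minimal stdlib-only sketch of that convention (illustrative only; the `retriesAllowed` helper is hypothetical, not franz-go's internal code):

```go
package main

import "fmt"

// retriesAllowed models the "-1 disables the limit" option convention
// described above: any negative limit means unlimited retries, while a
// non-negative limit caps the number of attempts. Illustrative sketch,
// not code from franz-go.
func retriesAllowed(limit, attempted int) bool {
	if limit < 0 { // -1 (or any negative): the limit is disabled
		return true
	}
	return attempted < limit
}

func main() {
	fmt.Println(retriesAllowed(3, 2))        // under the limit: true
	fmt.Println(retriesAllowed(3, 3))        // limit reached: false
	fmt.Println(retriesAllowed(-1, 1000000)) // -1: unlimited, true
}
```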
Improvements
Waiting for unknown topics while producing now takes into account both the
produce context and aborting. Previously, the record context was only taken
into account after a topic was loaded. The same is true for aborting buffered
records: previously, abort would hang until a topic was loaded.
New APIs are added to kmsg to deprecate the previous `Into` functions. The `Into` functions still exist and will not be removed until kadm is stabilized (see #141).
Features
`ConsumeResetOffset` is now clearer: you can now use `NoResetOffset` with start or end or exact offsets, and there is now the very useful `Offset.AfterMilli` function. Previously, `NoResetOffset` only allowed starting consuming at the start, and it was not obvious why. We keep the previous default-to-start behavior, but we now allow modifying it. As well, `AfterMilli` can be used to largely replace `AtEnd`. Odds are, you want to consume all records after your program starts, even if new partitions are added to a topic. Previously, if you added a partition to a topic, `AtEnd` would miss records that were produced until the client refreshed metadata and discovered the partition. Because of this, you were safer using `AtStart`, but this unnecessarily forced you to consume everything on program start.
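The semantics above can be sketched without the library: a timestamp-based start keeps every record produced at or after program start, regardless of partition, which is why it also covers partitions added later. This is a stdlib illustration of the `Offset.AfterMilli` behavior described in the notes, not franz-go's implementation (the `record` type and `afterMilli` helper are hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// record is a stand-in for a consumed Kafka record; only the
// partition and timestamp matter for this sketch.
type record struct {
	Partition int
	TimeMilli int64
}

// afterMilli keeps every record produced at or after startMilli,
// regardless of partition. Unlike an "at end" start, it cannot miss
// records in partitions the client discovers only after a later
// metadata refresh.
func afterMilli(startMilli int64, recs []record) []record {
	var out []record
	for _, r := range recs {
		if r.TimeMilli >= startMilli {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	start := time.Now().UnixMilli()
	recs := []record{
		{Partition: 0, TimeMilli: start - 1000}, // produced before start: skipped
		{Partition: 0, TimeMilli: start + 10},   // after start: kept
		{Partition: 5, TimeMilli: start + 20},   // newly added partition: still kept
	}
	fmt.Println(len(afterMilli(start, recs))) // 2
}
```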
Custom group balancers can now return errors, you can now intercept commits
to attach metadata, and you can now intercept offset fetches to read
metadata. Previously, none of this was possible. I considered metadata a bit
of a niche feature, but accessing it (as well as returning errors when
balancing) is required if you want to implement streams. New APIs now exist to
support the more advanced behavior:
`PreCommitFnContext`, `OnOffsetsFetched`, and `GroupMemberBalancerOrError`. As well, `BalancePlan.AsMemberIDMap` now exists to provide access to a plan's underlying plan map. This did not exist previously because I wanted to keep the type opaque for potential future changes, but the odds of this are low and we can attempt forward compatibility when the time arises.
`RecordReader` now supports regular expressions for text values.
Relevant commits
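As a rough illustration of regex matching on text values, here is a stdlib sketch using Go's `regexp` package (the `matchValue` helper is hypothetical; this is not the kgo `RecordReader` API itself):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchValue reports whether a record's text value matches the given
// regular expression; a stdlib stand-in for the RecordReader feature
// described above.
func matchValue(pattern, value string) bool {
	return regexp.MustCompile(pattern).MatchString(value)
}

func main() {
	fmt.Println(matchValue(`^user-\d+$`, "user-42"))  // true
	fmt.Println(matchValue(`^user-\d+$`, "admin-42")) // false
}
```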
a2cbbf8 go.{mod,sum}: go get -u ./...; go mod tidy
ce7a84f kerberos: split into dedicated module, p1
e8e5c82 and 744a60e kgo: improve ConsumeResetOffset, NoResetOffset, add Offset.AfterMilli
78fff0f and e8e5117 and b457742: add GroupMemberBalancerOrError
b5256c7 kadm: fix long standing poor API (Into fns)
8148c55 BalancePlan: add AsMemberIDMap
113a2c0 add OnOffsetsFetched function to allow inspecting commit metadata
0a4f2ec and cba9e26 kgo: add PreCommitFnContext, enabling pre-commit interceptors for metadata
42e5b57 producer: allow a canceled context & aborting to quit unknown wait
96d647a UnknownTopicRetries: allow -1 to disable the option
001c6d3 RecordReader: support regular expressions for text values
Configuration
📅 Schedule: Branch creation - "before 9am on Monday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.