Add a generic chunk pool for batch.NewChunkMergeIterator #5110

Status: Closed · wants to merge 1 commit
25 changes: 23 additions & 2 deletions pkg/querier/batch/batch.go
@@ -11,6 +11,7 @@ import (
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/histogram"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/prometheus/prometheus/util/zeropool"

"github.com/grafana/mimir/pkg/storage/chunk"
)
@@ -55,17 +56,37 @@ type iterator interface {
Err() error
}

var genericChunkSlicePool zeropool.Pool[[]GenericChunk]

func getGenericChunkSlice(n int) []GenericChunk {
sn := genericChunkSlicePool.Get()
if sn == nil || cap(sn) < n {
Contributor:

Would some kind of pool with different buckets for different ranges of n be better here, to avoid the pool only ever having large slices?
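To illustrate the idea raised here, below is a minimal, hypothetical sketch of a bucketed pool (not Mimir code): one `sync.Pool` per power-of-two capacity, so a request for a small slice never pins a huge one in a shared pool. The `chunk` type and all names are placeholders standing in for `GenericChunk`.

```go
package main

import (
	"fmt"
	"math/bits"
	"sync"
)

// chunk stands in for Mimir's GenericChunk; the element type is
// irrelevant to the pooling strategy sketched here.
type chunk struct{ from, through int64 }

// bucketedPool keeps one sync.Pool per power-of-two capacity, so a
// request for a small slice is never satisfied with a huge one.
type bucketedPool struct {
	buckets [20]sync.Pool // capacities 1, 2, 4, ..., 2^19
}

// bucketFor returns the smallest b such that 2^b >= n.
func bucketFor(n int) int {
	if n <= 1 {
		return 0
	}
	return bits.Len(uint(n - 1))
}

func (p *bucketedPool) get(n int) []chunk {
	b := bucketFor(n)
	if v := p.buckets[b].Get(); v != nil {
		return v.([]chunk)[:n]
	}
	return make([]chunk, n, 1<<b)
}

func (p *bucketedPool) put(s []chunk) {
	c := cap(s)
	if c == 0 || c&(c-1) != 0 {
		return // only pool exact power-of-two capacities
	}
	p.buckets[bucketFor(c)].Put(s[:0])
}

func main() {
	var p bucketedPool
	s := p.get(5) // capacity rounded up to 8
	fmt.Println(len(s), cap(s))
	p.put(s)
}
```

The trade-off versus a single pool is a little rounding waste per slice in exchange for a bounded worst case: a query needing 10 chunks can at most pin a capacity-16 slice, never a capacity-100000 one.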

Contributor:

+1. In a multi-tenant installation I assume you get queries of all kinds of different sizes, so you'll end up increasing the memory footprint over time by accumulating more and more large slices in the pool. Maybe measure first with a histogram in some busy environment to get a good idea of the buckets?

Note: I've checked that this is invoked after applying the max chunks per query limit.

Contributor:

You could maybe use SlabPool, which can pack smaller slices together into one big slice. We used to have a bucketed pool, but it was unused and was removed in #4996.

Contributor:

We use this code in the store-gateways. The way we size slabs is to account for the worst-case or p99 number of chunks per series. This is an example for slices of chunks.

So if a series can have at most 300 chunks, use a slab size of 300. If all series have fewer chunks, e.g. 3, then one slab can fit 100 series. The overhead isn't huge, and you also get a reduced number of overall allocations and maybe even better CPU cache locality, since sequential slices are sequential in memory.
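The packing arithmetic described above can be sketched as follows. This is a hypothetical simplification of the slab idea, not Mimir's actual `pool.SlabPool` API; the `chunk` type and all names are placeholders.

```go
package main

import "fmt"

// chunk stands in for Mimir's GenericChunk.
type chunk struct{ from, through int64 }

// slabPool hands out small slices carved from large backing arrays
// ("slabs"), so many per-series slices share one allocation.
type slabPool struct {
	slabSize int
	current  []chunk // remaining free space in the newest slab
	slabs    int     // number of slabs allocated so far
}

// get returns a length-n slice carved from the current slab,
// allocating a fresh slab when n does not fit in the remainder.
func (p *slabPool) get(n int) []chunk {
	if n > p.slabSize {
		return make([]chunk, n) // oversized requests bypass the slabs
	}
	if len(p.current) < n {
		p.current = make([]chunk, p.slabSize)
		p.slabs++
	}
	s := p.current[:n:n] // full slice expr so append can't overrun the slab
	p.current = p.current[n:]
	return s
}

func main() {
	// Size slabs for a worst case of 300 chunks per series; if every
	// series actually has 3 chunks, one slab serves 100 series.
	p := &slabPool{slabSize: 300}
	for i := 0; i < 100; i++ {
		_ = p.get(3)
	}
	fmt.Println(p.slabs) // prints 1: all 100 slices came from one slab
}
```

A real implementation also needs a release step that returns whole slabs to an underlying pool once no caller references them, which is the part a sketch like this glosses over.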

// Even if we get a slice from the pool, we do not put it back here
// to not grow the pool size more than required. This effectively
// replaces a smaller slice with a bigger slice in the pool.
return make([]GenericChunk, n)
}
return sn[:n]
}

func putGenericChunkSlice(sn []GenericChunk) {
genericChunkSlicePool.Put(sn)
}

// NewChunkMergeIterator returns a chunkenc.Iterator that merges Mimir chunks together.
func NewChunkMergeIterator(it chunkenc.Iterator, chunks []chunk.Chunk, _, _ model.Time) chunkenc.Iterator {
-	converted := make([]GenericChunk, len(chunks))
+	converted := getGenericChunkSlice(len(chunks))
for i, c := range chunks {
converted[i] = NewGenericChunk(int64(c.From), int64(c.Through), c.Data.NewIterator)
}

-	return NewGenericChunkMergeIterator(it, converted)
+	i := NewGenericChunkMergeIterator(it, converted)
+	putGenericChunkSlice(converted)
+	return i
}

// NewGenericChunkMergeIterator returns a chunkenc.Iterator that merges generic chunks together.
// This function must not hold a reference to the 'chunks' slice.
func NewGenericChunkMergeIterator(it chunkenc.Iterator, chunks []GenericChunk) chunkenc.Iterator {
var iter *mergeIterator

2 changes: 2 additions & 0 deletions pkg/querier/batch/merge.go
@@ -28,6 +28,8 @@ type mergeIterator struct {
currErr error
}

// newMergeIterator returns an iterator that merges generic chunks in batches.
// This functions must not hold a reference to the `cs` slice.
Contributor:

[nit] Suggested change:
-// This functions must not hold a reference to the `cs` slice.
+// This function must not hold a reference to the `cs` slice.

func newMergeIterator(it iterator, cs []GenericChunk) *mergeIterator {
c, ok := it.(*mergeIterator)
if ok {