[aggregator] Sort heap in one go, instead of iterating one-by-one #3331

Merged: 2 commits into master from v/heap on Mar 8, 2021

Conversation

@vdarulis (Collaborator) commented on Mar 6, 2021

What this PR does / why we need it:

A few more small timer aggregation tweaks (a sketch of the heap change follows the benchmark numbers below).
Total throughput with 12 concurrent streams:

name                                                              old speed      new speed      delta
TimerAddBatch/100k_samples_1000_batch_size-12                      604MB/s ± 3%   633MB/s ± 0%   +4.84%  (p=0.008 n=5+5)
TimerAddBatch/100k_samples_1000_batch_size_0..1,000,000_range-12   555MB/s ± 5%   635MB/s ± 0%  +14.50%  (p=0.008 n=5+5)
TimerAddBatch/100k_samples_1000_batch_size_0..100,000_range-12     567MB/s ± 1%   584MB/s ± 5%     ~     (p=0.151 n=5+5)
TimerAddBatch/10k_samples_1000_batch_size-12                       543MB/s ± 2%   590MB/s ± 2%   +8.54%  (p=0.008 n=5+5)
TimerAddBatch/10k_samples_100_batch_size-12                        536MB/s ± 1%   583MB/s ± 2%   +8.83%  (p=0.008 n=5+5)
TimerAddBatch/10k_samples_100_batch_size_with_negative_values-12   537MB/s ± 1%   575MB/s ± 2%   +7.09%  (p=0.008 n=5+5)
TimerAddBatch/1k_samples_100_batch_size-12                         621MB/s ± 1%   652MB/s ± 2%   +4.92%  (p=0.008 n=5+5)

Single CPU only:

name                                                           old speed      new speed      delta
TimerAddBatch/100k_samples_1000_batch_size                     74.2MB/s ± 0%  80.0MB/s ± 0%  +7.74%  (p=0.008 n=5+5)
TimerAddBatch/100k_samples_1000_batch_size_0..1,000,000_range  74.3MB/s ± 0%  79.6MB/s ± 1%  +7.17%  (p=0.008 n=5+5)
TimerAddBatch/100k_samples_1000_batch_size_0..100,000_range    74.1MB/s ± 0%  79.9MB/s ± 1%  +7.87%  (p=0.008 n=5+5)
TimerAddBatch/10k_samples_1000_batch_size                      75.8MB/s ± 0%  80.9MB/s ± 1%  +6.78%  (p=0.008 n=5+5)
TimerAddBatch/10k_samples_100_batch_size                       76.0MB/s ± 1%  81.5MB/s ± 0%  +7.22%  (p=0.008 n=5+5)
TimerAddBatch/10k_samples_100_batch_size_with_negative_values  75.6MB/s ± 0%  81.2MB/s ± 1%  +7.42%  (p=0.008 n=5+5)
TimerAddBatch/1k_samples_100_batch_size                        91.7MB/s ± 0%  97.1MB/s ± 1%  +5.80%  (p=0.008 n=5+5)
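
For readers not familiar with the change, here is a minimal, hypothetical sketch of the trade-off named in the title: draining a heap one Pop at a time versus sorting the backing slice in one go. The minHeap type and the drainOneByOne/sortInOneGo functions are illustrative only, not the actual types in src/aggregator/aggregation/quantile/cm.

```go
package main

import (
	"container/heap"
	"fmt"
	"sort"
)

// minHeap is a minimal float64 min-heap backed by a plain slice.
type minHeap []float64

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return h[i] < h[j] }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(float64)) }
func (h *minHeap) Pop() interface{} {
	old := *h
	n := len(old)
	v := old[n-1]
	*h = old[:n-1]
	return v
}

// drainOneByOne is the one-by-one pattern: pop the heap element by element.
// Every Pop sifts the root down, so draining costs O(n log n) with an
// interface call per element.
func drainOneByOne(h *minHeap) []float64 {
	out := make([]float64, 0, h.Len())
	for h.Len() > 0 {
		out = append(out, heap.Pop(h).(float64))
	}
	return out
}

// sortInOneGo is the one-pass pattern: give up the heap property and sort
// the backing slice directly, touching memory sequentially and avoiding
// the per-element interface overhead.
func sortInOneGo(h minHeap) []float64 {
	sort.Float64s(h)
	return h
}

func main() {
	a := minHeap{5, 1, 4, 2, 3}
	heap.Init(&a)
	fmt.Println(drainOneByOne(&a)) // [1 2 3 4 5]

	b := minHeap{5, 1, 4, 2, 3}
	heap.Init(&b)
	fmt.Println(sortInOneGo(b)) // [1 2 3 4 5]
}
```

The real change in heap.go operates on the stream's sample heap rather than a toy slice; the sketch only illustrates the one-pass-versus-per-element difference the PR title refers to.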

Special notes for your reviewer:

Does this PR introduce a user-facing and/or backwards incompatible change?:


Does this PR require updating code package or user-facing documentation?:


@codecov (bot) commented on Mar 6, 2021

Codecov Report

Merging #3331 (e995ae4) into master (a03e55f) will increase coverage by 26.8%.
The diff coverage is 98.7%.


@@             Coverage Diff             @@
##           master    #3331       +/-   ##
===========================================
+ Coverage    45.6%    72.5%    +26.8%     
===========================================
  Files           4     1099     +1095     
  Lines         274   101668   +101394     
===========================================
+ Hits          125    73727    +73602     
- Misses        139    22864    +22725     
- Partials       10     5077     +5067     
Flag        Coverage Δ
aggregator  76.6% <98.7%> (?)
cluster     84.9% <ø> (?)
collector   84.3% <ø> (+38.6%) ⬆️
dbnode      78.9% <ø> (?)
m3em        74.4% <ø> (?)
m3ninx      73.5% <ø> (?)
metrics     20.0% <ø> (?)
msg         74.2% <ø> (?)
query       67.4% <ø> (?)
x           80.4% <ø> (?)

Flags with carried forward coverage won't be shown.


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update a03e55f...caf401f.

Review threads (outdated, resolved) on:
src/aggregator/aggregation/quantile/cm/stream.go
src/aggregator/aggregation/quantile/cm/heap.go
@vdarulis merged commit a6607a6 into master on Mar 8, 2021
@vdarulis deleted the v/heap branch on Mar 8, 2021 at 01:25
soundvibe added a commit that referenced this pull request Mar 9, 2021
* master: (22 commits)
  Remove deprecated fields (#3327)
  Add quotas to Permits (#3333)
  [aggregator] Drop messages that have a drop policy applied (#3341)
  Fix NPE due to race with a closing series (#3056)
  [coordinator] Apply auto-mapping rules if-and-only-if no drop policies are in effect (#3339)
  [aggregator] Add validation in AddTimedWithStagedMetadatas (#3338)
  [coordinator] Fix panic in Ready endpoint for admin coordinator (#3335)
  [instrument] Config option to emit detailed Go runtime metrics only (#3332)
  [aggregator] Sort heap in one go, instead of iterating one-by-one (#3331)
  [pool] Add support for dynamic, sync.Pool backed, object pools (#3334)
  Enable PANIC_ON_INVARIANT_VIOLATED for tests (#3326)
  [aggregator] CanLead for unflushed window takes BufferPast into account (#3328)
  Optimize StagedMetadatas conversion (#3330)
  [m3msg] Improve message scan performance (#3319)
  [dbnode] Add reason tag to bootstrap retries metric (#3317)
  [coordinator] Enable rule filtering on prom metric type (#3325)
  Update m3dbnode-all-config.yml (#3204)
  [coordinator] Include Type in RollupOp.Equal (#3322)
  [coordinator] Simplify iteration logic of matchRollupTarget (#3321)
  [coordinator] Add rollup type to remove specific dimensions (#3318)
  ...