
tpcc: modify join query so it's supported by opt #27721

Merged 1 commit into cockroachdb:master on Jul 19, 2018

Conversation

@RaduBerinde (Member)

Minor rewrite of a TPCC query so it's supported by the optimizer.
This is the query which requires lookup join and the optimizer does
choose that plan.

Release note: None

I ran with `--wait=false` against a cluster. I also found an instance with `SHOW QUERIES` and ran `EXPLAIN` to make sure it's a plan with lookup-join.
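
For reference, that check amounts to something like the following. This is only a sketch: `SHOW QUERIES` and `EXPLAIN` are the statements mentioned above, but the stock-level query text is reconstructed from the standard TPC-C schema with made-up literal values rather than copied from this PR.

```sql
-- Find an in-flight instance of the stock-level statement and copy its text.
SHOW QUERIES;

-- Explain the captured statement and confirm the plan contains a lookup-join.
-- The literal values below stand in for whatever the workload bound at runtime.
EXPLAIN
  SELECT count(*)
  FROM (
    SELECT DISTINCT s_i_id
    FROM order_line
    JOIN stock ON s_w_id = ol_w_id AND s_i_id = ol_i_id
    WHERE ol_w_id = 1
      AND ol_d_id = 1
      AND ol_o_id BETWEEN 2981 AND 3000
      AND s_quantity < 15
  ) AS recent_items;
```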

@cockroach-teamcity (Member)

This change is Reviewable

@petermattis (Collaborator) left a comment

:lgtm:

@andy-kimball (Contributor) left a comment

Is there any concern we impacted TPCC perf with this alternate query form?

@petermattis (Collaborator) left a comment

> Is there any concern we impacted TPCC perf with this alternate query form?

This is a rare query (something like <1% of the queries) and the switch from a single count-distinct operator to a distinct and then a count operator feels like it would get lost in the noise. That's a guess, though. Running tpcc-bench before and after wouldn't hurt. Or perhaps just let @nvanbenschoten know.
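
To make the operator switch concrete, here is a minimal sketch of the two aggregation shapes being compared, written against the TPC-C `stock` table for illustration only; it is not the literal diff in this PR.

```sql
-- Old shape: a single aggregation computes the distinct count directly.
SELECT count(DISTINCT s_i_id)
FROM stock
WHERE s_quantity < 15;

-- New shape: an explicit DISTINCT pass feeds a plain count.
SELECT count(*)
FROM (
  SELECT DISTINCT s_i_id
  FROM stock
  WHERE s_quantity < 15
) AS low_stock_items;
```

Both forms return the same result (s_i_id is part of the primary key, so it is never NULL); the difference is only in which aggregation operators the optimizer has to handle.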


@nvanbenschoten (Member)

Yes, this is a rare transaction so I don't expect it to alter perf. Still, I'd like to see a comparison between the old and the new query before making the change. You can run with `--wait=false --mix=stockLevel=1` to isolate it.

@RaduBerinde (Member, Author)

I will run that. But I assume I need to first run the default "mix" for a while to populate the database with data? Or is there some already-populated backup I can restore?

@RaduBerinde (Member, Author)

I ran in various configurations on a local single-node cluster. Not much of a difference.

Old query (optimizer on but not used):

_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
      1s        0          187.0          187.0     37.7    121.6    159.4    302.0 stockLevel
      2s        0          198.0          192.5     29.4    142.6    201.3   1275.1 stockLevel
      3s        0          192.0          192.3     30.4    142.6    218.1   1140.9 stockLevel
      4s        0          192.0          192.2     29.4    142.6    285.2    453.0 stockLevel
      5s        0          200.0          193.8     27.3    151.0    234.9    738.2 stockLevel
      6s        0          197.0          194.3     29.4    159.4    234.9    520.1 stockLevel
      7s        0          201.0          195.3     29.4    130.0    302.0    906.0 stockLevel


Old (optimizer off, should be the same):

_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
      1s        0          192.0          192.0     35.7    151.0    234.9    385.9 stockLevel
      2s        0          202.0          197.0     33.6    142.6    251.7    637.5 stockLevel
      3s        0          178.0          190.7     31.5    151.0    209.7    805.3 stockLevel
      4s        0          193.0          191.3     30.4    151.0    285.2   1275.1 stockLevel
      5s        0          195.0          192.0     30.4    151.0    226.5    671.1 stockLevel
      6s        0          196.9          192.8     31.5    159.4    402.7    604.0 stockLevel
      7s        0          197.1          193.4     30.4    151.0    243.3    570.4 stockLevel
      8s        0          193.0          193.4     33.6    130.0    285.2    352.3 stockLevel


New query (optimizer off):

_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
      1s        0          189.0          189.0     30.4    142.6    234.9    453.0 stockLevel
      2s        0          197.0          193.0     33.6    134.2    369.1    570.4 stockLevel
      3s        0          192.0          192.7     29.4    151.0    243.3    503.3 stockLevel
      4s        0          199.0          194.2     25.2    151.0    226.5   1275.1 stockLevel
      5s        0          193.0          194.0     33.6    151.0    234.9    838.9 stockLevel
      6s        0          194.0          194.0     35.7    151.0    234.9    503.3 stockLevel
      7s        0          199.0          194.7     24.1    142.6    201.3    302.0 stockLevel
      8s        0          187.0          193.7     32.5    159.4    251.7   1879.0 stockLevel

New query (optimizer on):

_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
      1s        0          192.0          192.0     30.4    117.4    218.1    234.9 stockLevel
      2s        0          196.0          194.0     28.3    176.2    285.2   1208.0 stockLevel
      3s        0          179.0          189.0     31.5    159.4    192.9    285.2 stockLevel
      4s        0          175.0          185.5     29.4    167.8    234.9   2013.3 stockLevel
      5s        0          183.0          185.0     30.4    167.8    302.0   1208.0 stockLevel
      6s        0          169.0          182.3     39.8    201.3    285.2    402.7 stockLevel
      7s        0          181.0          182.1     29.4    159.4    285.2    671.1 stockLevel
      8s        0          201.0          184.5     32.5    167.8    268.4    637.5 stockLevel

As a sanity check, I turned off the optimizer and the experimental lookup flag; that configuration is much slower:

New query (optimizer off, experimental lookup flag off):
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
      1s        0           12.0           12.0    260.0    906.0   1006.6   1006.6 stockLevel
      2s        0           12.0           12.0    335.5    906.0   1342.2   1342.2 stockLevel
      3s        0           12.0           12.0    419.4   2415.9   2818.6   2818.6 stockLevel
      4s        0           13.0           12.2    402.7   2080.4   3758.1   3758.1 stockLevel

@nvanbenschoten (Member) left a comment
:lgtm: thanks for verifying the impact of this.


@RaduBerinde (Member, Author)

bors r+

craig bot pushed a commit that referenced this pull request Jul 19, 2018
27721: tpcc: modify join query so it's supported by opt r=RaduBerinde a=RaduBerinde

Co-authored-by: Radu Berinde <radu@cockroachlabs.com>
@craig (bot) commented Jul 19, 2018

Build succeeded

@craig craig bot merged commit 6957f81 into cockroachdb:master Jul 19, 2018
@RaduBerinde RaduBerinde deleted the tpcc-opt-update branch July 19, 2018 19:29