Benchmark runner script #918
Conversation
I had a couple of questions mostly, but otherwise this script looks good to me. The main thing was: should we include spark-submit-template.txt?
--input-format parquet \
--output /path/to/output \
--output-format parquet \
--configs cpu gpu-ucx-on
missing \ at end?
Thanks. Fixed.
This benchmark script assumes that the following environment variables have been set for
the location of the relevant JAR files to be used:

- SPARK_RAPIDS_PLUGIN_JAR
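As a sketch of what this documented requirement implies, a script could fail fast when the variable is unset. This helper is illustrative only; SPARK_RAPIDS_PLUGIN_JAR is the name documented above, but the helper itself is not the script's actual code.

```python
import os
import sys

# Illustrative only: fail fast if the JAR-location environment variables
# documented above are not set. SPARK_RAPIDS_PLUGIN_JAR comes from the docs;
# the helper itself is a sketch, not the benchmark script's real code.
REQUIRED_ENV_VARS = ["SPARK_RAPIDS_PLUGIN_JAR"]

def check_env(required=REQUIRED_ENV_VARS):
    """Return a dict of resolved JAR paths, exiting if any variable is unset."""
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        sys.exit("Missing required environment variables: " + ", ".join(missing))
    return {name: os.environ[name] for name in required}
```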
Any reason we use env variables vs parameters to the script? For script purposes parameters would be easier; I think it's also more obvious to the user, and they don't accidentally pick up something unexpected.
My reasoning was that we tell users to set up these environment variables in the getting started guide and I have been using this approach as a bit of a stop-gap solution for the reporting tools to show the plugin and cuDF versions that were used to run benchmarks. This isn't ideal and it would be better to use cuDF and plugin APIs to get the version numbers instead. I haven't looked into whether this is possible or not. I'll give this some more thought.
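One possible compromise matching this discussion is a parameter that defaults to the environment variable, so the guide's setup still works but the value is explicit when given. This is a sketch; the --plugin-jar flag name is an assumption for illustration, not the script's real interface.

```python
import argparse
import os

# Sketch of a CLI flag that falls back to the environment variable from the
# getting-started guide. The --plugin-jar flag name is an assumption for
# illustration, not the benchmark script's actual argument.
def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="benchmark runner (sketch)")
    parser.add_argument(
        "--plugin-jar",
        default=os.environ.get("SPARK_RAPIDS_PLUGIN_JAR"),
        help="path to the RAPIDS plugin JAR "
             "(defaults to $SPARK_RAPIDS_PLUGIN_JAR)")
    args = parser.parse_args(argv)
    if args.plugin_jar is None:
        parser.error("--plugin-jar not given and SPARK_RAPIDS_PLUGIN_JAR is unset")
    return args
```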
--query q4 q5

In this example, configuration key-value pairs will be loaded from cpu.properties and
gpu-ucx-on.properties and appended to a spark-submit-template.txt to build the spark-submit
command.
Where does spark-submit-template.txt come from? The current working directory? What if I have multiple of these and want to switch between them? What if I run from a different directory?
I've made the template file configurable now.
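To make the described behaviour concrete, here is a sketch of combining a (now configurable) template file with a per-config .properties file into a spark-submit command line. The --conf rendering and the file handling are assumptions about how such a script might work, not the actual implementation.

```python
# Sketch only: merge a spark-submit template with a .properties file, per the
# description above. Each non-comment key=value line becomes a --conf option
# appended to the template's command line. Details are assumptions, not the
# real script's behaviour.
def build_spark_submit(template_path, properties_path):
    with open(template_path) as f:
        template = f.read().rstrip()
    confs = []
    with open(properties_path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                confs.append(f"--conf {key.strip()}={value.strip()}")
    return " ".join([template] + confs)
```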
Signed-off-by: Andy Grove <andygrove@nvidia.com>
Remove hard-coded spark-submit-template.txt and add --template argument. Also make all arguments required. Signed-off-by: Andy Grove <andygrove@nvidia.com>
Force-pushed from d3745a2 to 10713c5
--output /path/to/output \
--output-format parquet \
--configs cpu gpu-ucx-on \
--query q4 q5
Add the --template option to this example.
* Benchmark runner script
* Add argument for number of iterations
* Fix docs
* Add license
* Improve documentation for the configuration files
* Add missing line-continuation symbol in example
* Remove hard-coded spark-submit-template.txt and add --template argument. Also make all arguments required.
* Update benchmarking guide to link to the benchmark python script
* Add --template to example and fix markdown header

Signed-off-by: Andy Grove <andygrove@nvidia.com>
Signed-off-by: Andy Grove <andygrove@nvidia.com>
This PR adds a Python script for running a series of TPC-* benchmark queries with one or more configurations. See the documentation in the PR for more details.
The approach is purposely simplistic and we may want to make it smarter in the future, but it allows us to leave benchmarks running unattended with minimal effort.
I will need to update the benchmarking guide to cover this new utility (that will be a separate PR).
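The "purposely simplistic" approach described above can be sketched as a nested loop: every query runs under every configuration for a fixed number of iterations, with one spark-submit invocation per combination. Function and parameter names here are illustrative assumptions, not the script's actual API.

```python
import itertools
import subprocess

# Sketch of the simplistic runner loop described in the PR: run every query
# under every configuration for the requested number of iterations, invoking
# one spark-submit command per combination. make_command is a caller-supplied
# function returning the argv list to run; all names are illustrative.
def run_benchmarks(configs, queries, iterations, make_command):
    for config, query in itertools.product(configs, queries):
        for i in range(iterations):
            cmd = make_command(config, query)
            print(f"[{config}] {query} iteration {i + 1}/{iterations}")
            subprocess.run(cmd, check=True)
```

With check=True, a failing spark-submit stops the run immediately rather than silently continuing through the remaining combinations.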