diff --git a/api_validation/pom.xml b/api_validation/pom.xml
index d8798cacbd7..050b83202eb 100644
--- a/api_validation/pom.xml
+++ b/api_validation/pom.xml
@@ -22,10 +22,10 @@
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<artifactId>rapids-4-spark-api-validation</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
diff --git a/dist/pom.xml b/dist/pom.xml
index 824975fb5a2..4312f51ef3a 100644
--- a/dist/pom.xml
+++ b/dist/pom.xml
@@ -22,13 +22,13 @@
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Distribution</name>
<description>Creates the distribution package of the RAPIDS plugin for Apache Spark</description>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
diff --git a/docs/configs.md b/docs/configs.md
index ca0c4bada00..bf6f8bf9824 100644
--- a/docs/configs.md
+++ b/docs/configs.md
@@ -10,7 +10,7 @@ The following is the list of options that `rapids-plugin-4-spark` supports.
On startup use: `--conf [conf key]=[conf value]`. For example:
```
-${SPARK_HOME}/bin/spark --jars 'rapids-4-spark_2.12-0.1-SNAPSHOT.jar,cudf-0.14-SNAPSHOT-cuda10.jar' \
+${SPARK_HOME}/bin/spark --jars 'rapids-4-spark_2.12-0.2.0-SNAPSHOT.jar,cudf-0.15-SNAPSHOT-cuda10-1.jar' \
--conf spark.plugins=com.nvidia.spark.SQLPlugin \
--conf spark.rapids.sql.incompatibleOps.enabled=true
```
diff --git a/docs/getting-started.md b/docs/getting-started.md
index 0297e3aecea..a1d355b315e 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -108,15 +108,15 @@ CUDA and will not run on other versions. The jars use a maven classifier to keep
- CUDA 10.2 => classifier cuda10-2
For example, here is a sample version of the jars and cudf with CUDA 10.1 support:
-- cudf-0.14-cuda10-1.jar
+- cudf-0.15-SNAPSHOT-cuda10-1.jar
- rapids-4-spark_2.12-0.1.0.jar
For simplicity export the location to these jars. This example assumes the sample jars above have
been placed in the `/opt/sparkRapidsPlugin` directory:
```shell
export SPARK_RAPIDS_DIR=/opt/sparkRapidsPlugin
-export SPARK_CUDF_JAR=${SPARK_RAPIDS_DIR}/cudf-0.14-cuda10-1.jar
-export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-0.1.0.jar
+export SPARK_CUDF_JAR=${SPARK_RAPIDS_DIR}/cudf-0.15-SNAPSHOT-cuda10-1.jar
+export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-0.2.0-SNAPSHOT.jar
```
## Install the GPU Discovery Script
diff --git a/docs/testing.md b/docs/testing.md
index b48bbe780c2..a50af4a55bb 100644
--- a/docs/testing.md
+++ b/docs/testing.md
@@ -20,7 +20,7 @@ we typically run with the default options and only increase the scale factor dep
dbgen -b dists.dss -s 10
```
-You can include the test jar `rapids-4-spark-integration-tests_2.12-0.1-SNAPSHOT.jar` with the
+You can include the test jar `rapids-4-spark-integration-tests_2.12-0.2.0-SNAPSHOT.jar` with the
Spark --jars option to get the TPCH tests. To setup for the queries you can run
`TpchLikeSpark.setupAllCSV` for CSV formatted data or `TpchLikeSpark.setupAllParquet`
for parquet formatted data. Both of those take the Spark session, and a path to the dbgen
@@ -77,7 +77,7 @@ individually, so you don't risk running unit tests along with the integration te
http://www.scalatest.org/user_guide/using_the_scalatest_shell
```shell
-spark-shell --jars rapids-4-spark-tests_2.12-0.1-SNAPSHOT-tests.jar,rapids-4-spark-integration-tests_2.12-0.1-SNAPSHOT-tests.jar,scalatest_2.12-3.0.5.jar,scalactic_2.12-3.0.5.jar
+spark-shell --jars rapids-4-spark-tests_2.12-0.2.0-SNAPSHOT-tests.jar,rapids-4-spark-integration-tests_2.12-0.2.0-SNAPSHOT-tests.jar,scalatest_2.12-3.0.5.jar,scalactic_2.12-3.0.5.jar
```
First you import the `scalatest_shell` and tell the tests where they can find the test files you
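The docs/testing.md changes above describe registering the TPCH tables with `TpchLikeSpark.setupAllCSV` (or `setupAllParquet`) and then running suites from a scalatest session inside `spark-shell`. A minimal sketch of that flow with the new 0.2.0-SNAPSHOT jar names is below; the package path, suite name, and dbgen output path are illustrative assumptions, not values taken from this diff.

```scala
// Minimal sketch only. Assumes spark-shell was started with the --jars list shown
// above, so a SparkSession named `spark` is already in scope.
import org.scalatest._                                    // scalatest shell: brings `run` into scope

// Package path is an assumption for illustration; the docs only name the object.
import com.nvidia.spark.rapids.tests.tpch.TpchLikeSpark

// Register the TPCH tables; per the docs, setupAllCSV takes the Spark session
// and the path to the dbgen output (placeholder path here).
TpchLikeSpark.setupAllCSV(spark, "/data/tpch/dbgen-output")

// Run one suite at a time so unit tests are not pulled in; suite name is hypothetical.
// run(new com.nvidia.spark.rapids.tests.tpch.TpchLikeSparkSuite)
```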
diff --git a/integration_tests/README.md b/integration_tests/README.md
index ab3a08716d9..5ff7e51ef4e 100644
--- a/integration_tests/README.md
+++ b/integration_tests/README.md
@@ -39,7 +39,7 @@ Most clusters probably will not have the RAPIDS plugin installed in the cluster
If just want to verify the SQL replacement is working you will need to add the `rapids-4-spark` and `cudf` jars to your `spark-submit` command.
```
-$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.1-SNAPSHOT.jar,cudf-0.14.jar" ./runtests.py
+$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.2.0-SNAPSHOT.jar,cudf-0.15-SNAPSHOT.jar" ./runtests.py
```
You don't have to enable the plugin for this to work, the test framework will do that for you.
@@ -70,7 +70,7 @@ The TPCxBB, TPCH, and Mortgage tests in this framework can be enabled by providi
As an example, here is the `spark-submit` command with the TPCxBB parameters:
```
-$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.1-SNAPSHOT.jar,cudf-0.14.jar,rapids-4-spark-tests_2.12-0.1-SNAPSHOT.jar" ./runtests.py --tpcxbb_format="csv" --tpcxbb_path="/path/to/tpcxbb/csv"
+$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.2.0-SNAPSHOT.jar,cudf-0.15-SNAPSHOT.jar,rapids-4-spark-tests_2.12-0.2.0-SNAPSHOT.jar" ./runtests.py --tpcxbb_format="csv" --tpcxbb_path="/path/to/tpcxbb/csv"
```
## Writing tests
diff --git a/integration_tests/pom.xml b/integration_tests/pom.xml
index 461fd4b311c..564584448e1 100644
--- a/integration_tests/pom.xml
+++ b/integration_tests/pom.xml
@@ -22,11 +22,11 @@
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-integration-tests_2.12</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
diff --git a/jenkins/Jenkinsfile.integration b/jenkins/Jenkinsfile.integration
index 2e4cecc8ea5..266a320828a 100644
--- a/jenkins/Jenkinsfile.integration
+++ b/jenkins/Jenkinsfile.integration
@@ -32,17 +32,17 @@ pipeline {
}
parameters {
- string(name: 'CUDF_VER', defaultValue: '0.14',
+ string(name: 'CUDF_VER', defaultValue: '0.15-SNAPSHOT',
description: '-Dcudf.version= \n\n Default for cudf version')
string(name: 'CUDA_CLASSIFIER', defaultValue: '',
description: '-Dclassifier=\n\n cuda10-1, cuda10-2, EMPTY as cuda10-1')
- string(name: 'PROJECT_VER', defaultValue: '0.1-SNAPSHOT',
- description: 'Default project version 0.1-SNAPSHOT')
+ string(name: 'PROJECT_VER', defaultValue: '0.2.0-SNAPSHOT',
+ description: 'Default project version 0.2.0-SNAPSHOT')
string(name: 'SPARK_VER', defaultValue: '3.0.0',
description: 'Default spark version 3.0.0')
string(name: 'SERVER_URL', defaultValue: 'https://urm.nvidia.com:443/artifactory/sw-spark-maven',
description: 'Default maven repo URL where to download Spark3.0 tar file.')
- string(name: 'REF', defaultValue: 'branch-0.1', description: 'Commit to build')
+ string(name: 'REF', defaultValue: 'branch-0.2', description: 'Commit to build')
}
environment {
diff --git a/jenkins/Jenkinsfile.nightly b/jenkins/Jenkinsfile.nightly
index c28454df710..1a6bc0bfe80 100644
--- a/jenkins/Jenkinsfile.nightly
+++ b/jenkins/Jenkinsfile.nightly
@@ -63,7 +63,7 @@ pipeline {
-v ${HOME}/.zinc:${HOME}/.zinc:rw \
-v /etc/passwd:/etc/passwd -v /etc/group:/etc/group") {
sh "mvn -U -B clean deploy $MVN_URM_MIRROR"
- sh "jenkins/printJarVersion.sh 'CUDFVersion' '${HOME}/.m2/repository/ai/rapids/cudf/0.14' 'cudf-0.14' '-cuda10-1.jar'"
+ sh "jenkins/printJarVersion.sh 'CUDFVersion' '${HOME}/.m2/repository/ai/rapids/cudf/0.15-SNAPSHOT' 'cudf-0.15-SNAPSHOT' '-cuda10-1.jar'"
sh "jenkins/printJarVersion.sh 'SPARKVersion' '${HOME}/.m2/repository/org/apache/spark/spark-core_2.12/3.0.0' 'spark-core_2.12-3.0.0-' '.jar'"
}
}
@@ -78,7 +78,7 @@ pipeline {
build(job: 'spark/rapids_integration-0.1-github',
propagate: false,
parameters: [string(name: 'REF', value: 'branch-0.1'),
- string(name: 'CUDF_VER', value: '0.14'),
+ string(name: 'CUDF_VER', value: '0.15-SNAPSHOT'),
booleanParam(name: 'BUILD_CENTOS7', value: false),])
slack("#rapidsai-spark-cicd", "Success", color: "#33CC33")
diff --git a/jenkins/databricks/build.sh b/jenkins/databricks/build.sh
index c8827f18811..a393a053238 100755
--- a/jenkins/databricks/build.sh
+++ b/jenkins/databricks/build.sh
@@ -80,7 +80,7 @@ mvn install:install-file \
mvn -Pdatabricks clean verify -DskipTests
# copy so we pick up new built jar
-sudo cp dist/target/rapids-4-spark_2.12-*-SNAPSHOT.jar /databricks/jars/rapids-4-spark_2.12-0.1-SNAPSHOT-ci.jar
+sudo cp dist/target/rapids-4-spark_2.12-*-SNAPSHOT.jar /databricks/jars/rapids-4-spark_2.12-0.2.0-SNAPSHOT-ci.jar
# tests
export PATH=/databricks/conda/envs/databricks-ml-gpu/bin:/databricks/conda/condabin:$PATH
diff --git a/jenkins/spark-tests.sh b/jenkins/spark-tests.sh
index 0b8f7f054a2..de860235710 100755
--- a/jenkins/spark-tests.sh
+++ b/jenkins/spark-tests.sh
@@ -17,11 +17,11 @@
set -ex
if [ "$CUDF_VER"x == x ];then
- CUDF_VER="0.14"
+ CUDF_VER="0.15-SNAPSHOT"
fi
if [ "$PROJECT_VER"x == x ];then
- PROJECT_VER="0.1-SNAPSHOT"
+ PROJECT_VER="0.2.0-SNAPSHOT"
fi
if [ "$SPARK_VER"x == x ];then
diff --git a/pom.xml b/pom.xml
index 362542c3113..5949a616f94 100644
--- a/pom.xml
+++ b/pom.xml
@@ -23,7 +23,7 @@
<artifactId>rapids-4-spark-parent</artifactId>
<name>RAPIDS Accelerator for Apache Spark Root Project</name>
<description>The root project of the RAPIDS Accelerator for Apache Spark</description>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<packaging>pom</packaging>
<url>https://github.com/NVIDIA</url>
@@ -132,7 +132,7 @@
1.8
3.0.0
cuda10-1
- <cudf.version>0.14</cudf.version>
+ <cudf.version>0.15-SNAPSHOT</cudf.version>
2.12
2.12.8
1.5.8
diff --git a/shuffle-plugin/pom.xml b/shuffle-plugin/pom.xml
index 9e912aa344d..ceb5810801e 100644
--- a/shuffle-plugin/pom.xml
+++ b/shuffle-plugin/pom.xml
@@ -22,14 +22,14 @@
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shuffle_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Shuffle Plugin</name>
<description>Accelerated shuffle plugin for the RAPIDS plugin for Apache Spark</description>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
diff --git a/sql-plugin/pom.xml b/sql-plugin/pom.xml
index 6f9ce765d37..f7465d8efc8 100644
--- a/sql-plugin/pom.xml
+++ b/sql-plugin/pom.xml
@@ -22,13 +22,13 @@
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-sql_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin</name>
<description>The RAPIDS SQL plugin for Apache Spark</description>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
diff --git a/sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala b/sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala
index 502cf3c53bd..319c644d6a8 100644
--- a/sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala
+++ b/sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala
@@ -612,7 +612,7 @@ object RapidsConf {
|On startup use: `--conf [conf key]=[conf value]`. For example:
|
|```
- |${SPARK_HOME}/bin/spark --jars 'rapids-4-spark_2.12-0.1-SNAPSHOT.jar,cudf-0.14-SNAPSHOT-cuda10.jar' \
+ |${SPARK_HOME}/bin/spark --jars 'rapids-4-spark_2.12-0.2.0-SNAPSHOT.jar,cudf-0.15-SNAPSHOT-cuda10-1.jar' \
|--conf spark.plugins=com.nvidia.spark.SQLPlugin \
|--conf spark.rapids.sql.incompatibleOps.enabled=true
|```
diff --git a/tests/pom.xml b/tests/pom.xml
index 365900f7288..62e56f603ec 100644
--- a/tests/pom.xml
+++ b/tests/pom.xml
@@ -22,13 +22,13 @@
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-tests_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Tests</name>
<description>RAPIDS plugin for Apache Spark integration tests</description>
- <version>0.1-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>