Init version 22.08.0-SNAPSHOT (NVIDIA#5647)
Signed-off-by: Peixin Li <pxli@nyu.edu>
pxLi authored and HaoYang670 committed Jun 6, 2022
1 parent 2cc7136 commit bf00ddb
Showing 27 changed files with 55 additions and 55 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -75,7 +75,7 @@ as a `provided` dependency.
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark_2.12</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
<scope>provided</scope>
</dependency>
```
4 changes: 2 additions & 2 deletions aggregator/pom.xml
@@ -22,12 +22,12 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-aggregator_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Aggregator</name>
<description>Creates an aggregated shaded package of the RAPIDS plugin for Apache Spark</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<properties>
<!--
4 changes: 2 additions & 2 deletions api_validation/pom.xml
@@ -22,10 +22,10 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-api-validation</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<profiles>
<profile>
4 changes: 2 additions & 2 deletions common/pom.xml
@@ -24,13 +24,13 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>

<artifactId>rapids-4-spark-common_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Common</name>
<description>Utility code that is common across the RAPIDS Accelerator projects</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<dependencies>
<dependency>
4 changes: 2 additions & 2 deletions dist/pom.xml
@@ -22,12 +22,12 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Distribution</name>
<description>Creates the distribution package of the RAPIDS plugin for Apache Spark</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>com.nvidia</groupId>
2 changes: 1 addition & 1 deletion docs/FAQ.md
@@ -380,7 +380,7 @@ There are multiple reasons why this a problematic configuration:

Yes, but it requires support from the underlying cluster manager to isolate the MIG GPU instance
for each executor (e.g.: by setting `CUDA_VISIBLE_DEVICES`,
-[YARN with docker isolation](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.06/examples/MIG-Support)
+[YARN with docker isolation](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.08/examples/MIG-Support)
or other means).

Note that MIG is not recommended for use with the RAPIDS Accelerator since it significantly
2 changes: 1 addition & 1 deletion docs/additional-functionality/ml-integration.md
@@ -40,7 +40,7 @@ access to any of the memory that RMM is holding.
## Spark ML Algorithms Supported by RAPIDS Accelerator

The [spark-rapids-examples repository](https://github.com/NVIDIA/spark-rapids-examples) provides a
-[working example](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.06/examples/ML+DL-Examples/Spark-cuML/pca)
+[working example](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.08/examples/ML+DL-Examples/Spark-cuML/pca)
of accelerating the `transform` API for
[Principal Component Analysis (PCA)](https://spark.apache.org/docs/latest/mllib-dimensionality-reduction#principal-component-analysis-pca).
The example leverages the [RAPIDS accelerated UDF interface](rapids-udfs.md) to provide a native
2 changes: 1 addition & 1 deletion docs/additional-functionality/rapids-udfs.md
@@ -135,7 +135,7 @@ type `DECIMAL64(scale=-2)`.
## RAPIDS Accelerated UDF Examples

<!-- Note: should update the branch name to tag when releasing-->
-Source code for examples of RAPIDS accelerated UDFs is provided in the [udf-examples](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.06/examples/UDF-Examples/RAPIDS-accelerated-UDFs) project.
+Source code for examples of RAPIDS accelerated UDFs is provided in the [udf-examples](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.08/examples/UDF-Examples/RAPIDS-accelerated-UDFs) project.

## GPU Support for Pandas UDF

2 changes: 1 addition & 1 deletion docs/configs.md
@@ -10,7 +10,7 @@ The following is the list of options that `rapids-plugin-4-spark` supports.
On startup use: `--conf [conf key]=[conf value]`. For example:

```
-${SPARK_HOME}/bin/spark --jars rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar \
+${SPARK_HOME}/bin/spark --jars rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar \
--conf spark.plugins=com.nvidia.spark.SQLPlugin \
--conf spark.rapids.sql.incompatibleOps.enabled=true
```
12 changes: 6 additions & 6 deletions docs/dev/shims.md
@@ -62,16 +62,16 @@ Using JarURLConnection URLs we create a Parallel World of the current version wi

Spark 3.0.2's URLs:
```
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark302/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark302/
```

Spark 3.2.0's URLs :
```
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark320/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark320/
```

### Late Inheritance in Public Classes
4 changes: 2 additions & 2 deletions docs/get-started/getting-started-gcp.md
@@ -186,9 +186,9 @@ val (xgbClassificationModel, _) = benchmark("train") {
## Submit Spark jobs to a Dataproc Cluster Accelerated by GPUs
Similar to spark-submit for on-prem clusters, Dataproc supports a Spark application job to be
submitted as a Dataproc job. The mortgage examples we use above are also available as a [spark
-application](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.06/examples/XGBoost-Examples).
+application](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.08/examples/XGBoost-Examples).
After [building the jar
-files](https://github.com/NVIDIA/spark-rapids-examples/blob/branch-22.06/docs/get-started/xgboost-examples/building-sample-apps/scala.md)
+files](https://github.com/NVIDIA/spark-rapids-examples/blob/branch-22.08/docs/get-started/xgboost-examples/building-sample-apps/scala.md)
.

Place the jar file `sample_xgboost_apps-<version>-jar-with-dependencies.jar` under the
8 changes: 4 additions & 4 deletions docs/get-started/getting-started-on-prem.md
@@ -53,13 +53,13 @@ CUDA and will not run on other versions. The jars use a classifier to keep them
- CUDA 11.x => classifier cuda11

For example, here is a sample version of the jar with CUDA 11.x support:
-- rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar
+- rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar

For simplicity export the location to this jar. This example assumes the sample jar above has
been placed in the `/opt/sparkRapidsPlugin` directory:
```shell
export SPARK_RAPIDS_DIR=/opt/sparkRapidsPlugin
-export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar
+export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar
```

## Install the GPU Discovery Script
@@ -304,7 +304,7 @@ are using.
#### YARN version 3.3.0+
YARN version 3.3.0 and newer support a pluggable device framework which allows adding support for
MIG devices via a plugin. See
-[NVIDIA GPU Plugin for YARN with MIG support for YARN 3.3.0+](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.06/examples/MIG-Support/device-plugins/gpu-mig).
+[NVIDIA GPU Plugin for YARN with MIG support for YARN 3.3.0+](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.08/examples/MIG-Support/device-plugins/gpu-mig).
If you are using that plugin with a Spark version older than 3.2.1 and/or specifying the resource
as `nvidia/miggpu` you will also need to specify the config:

@@ -321,7 +321,7 @@ required.
If you are using YARN version from 3.1.2 up until 3.3.0, it requires making modifications to YARN
and deploying a version that adds support for MIG to the built-in YARN GPU resource plugin.

-See [NVIDIA Support for GPU for YARN with MIG support for YARN 3.1.2 until YARN 3.3.0](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.06/examples/MIG-Support/resource-types/gpu-mig)
+See [NVIDIA Support for GPU for YARN with MIG support for YARN 3.1.2 until YARN 3.3.0](https://github.com/NVIDIA/spark-rapids-examples/tree/branch-22.08/examples/MIG-Support/resource-types/gpu-mig)
for details.

## Running on Kubernetes
6 changes: 3 additions & 3 deletions integration_tests/README.md
@@ -253,7 +253,7 @@ individually, so you don't risk running unit tests along with the integration te
http://www.scalatest.org/user_guide/using_the_scalatest_shell

```shell
-spark-shell --jars rapids-4-spark-tests_2.12-22.06.0-SNAPSHOT-tests.jar,rapids-4-spark-integration-tests_2.12-22.06.0-SNAPSHOT-tests.jar,scalatest_2.12-3.0.5.jar,scalactic_2.12-3.0.5.jar
+spark-shell --jars rapids-4-spark-tests_2.12-22.08.0-SNAPSHOT-tests.jar,rapids-4-spark-integration-tests_2.12-22.08.0-SNAPSHOT-tests.jar,scalatest_2.12-3.0.5.jar,scalactic_2.12-3.0.5.jar
```

First you import the `scalatest_shell` and tell the tests where they can find the test files you
@@ -276,7 +276,7 @@ If you just want to verify the SQL replacement is working you will need to add t
assumes CUDA 11.0 is being used.

```
-$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar" ./runtests.py
+$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar" ./runtests.py
```

You don't have to enable the plugin for this to work, the test framework will do that for you.
@@ -375,7 +375,7 @@ To run cudf_udf tests, need following configuration changes:
As an example, here is the `spark-submit` command with the cudf_udf parameter on CUDA 11.0:

```
-$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar,rapids-4-spark-tests_2.12-22.06.0-SNAPSHOT.jar" --conf spark.rapids.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.concurrentPythonWorkers=2 --py-files "rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar" --conf spark.executorEnv.PYTHONPATH="rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar" ./runtests.py --cudf_udf
+$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar,rapids-4-spark-tests_2.12-22.08.0-SNAPSHOT.jar" --conf spark.rapids.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.concurrentPythonWorkers=2 --py-files "rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar" --conf spark.executorEnv.PYTHONPATH="rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar" ./runtests.py --cudf_udf
```

### Enabling fuzz tests
4 changes: 2 additions & 2 deletions integration_tests/pom.xml
@@ -22,10 +22,10 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-integration-tests_2.12</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
<properties>
<target.classifier/>
</properties>
2 changes: 1 addition & 1 deletion jenkins/databricks/create.py
@@ -27,7 +27,7 @@ def main():
workspace = 'https://dbc-9ff9942e-a9c4.cloud.databricks.com'
token = ''
sshkey = ''
-cluster_name = 'CI-GPU-databricks-22.06.0-SNAPSHOT'
+cluster_name = 'CI-GPU-databricks-22.08.0-SNAPSHOT'
idletime = 240
runtime = '7.0.x-gpu-ml-scala2.12'
num_workers = 1
2 changes: 1 addition & 1 deletion jenkins/databricks/init_cudf_udf.sh
@@ -18,7 +18,7 @@
# The initscript to set up environment for the cudf_udf tests on Databricks
# Will be automatically pushed into the dbfs:/databricks/init_scripts once it is updated.

-CUDF_VER=${CUDF_VER:-22.06}
+CUDF_VER=${CUDF_VER:-22.08}

# Need to explictly add conda into PATH environment, to activate conda environment.
export PATH=/databricks/conda/bin:$PATH
6 changes: 3 additions & 3 deletions jenkins/version-def.sh
@@ -26,10 +26,10 @@ for VAR in $OVERWRITE_PARAMS; do
done
IFS=$PRE_IFS

CUDF_VER=${CUDF_VER:-"22.06.0-SNAPSHOT"}
CUDF_VER=${CUDF_VER:-"22.08.0-SNAPSHOT"}
CUDA_CLASSIFIER=${CUDA_CLASSIFIER:-"cuda11"}
-PROJECT_VER=${PROJECT_VER:-"22.06.0-SNAPSHOT"}
-PROJECT_TEST_VER=${PROJECT_TEST_VER:-"22.06.0-SNAPSHOT"}
+PROJECT_VER=${PROJECT_VER:-"22.08.0-SNAPSHOT"}
+PROJECT_TEST_VER=${PROJECT_TEST_VER:-"22.08.0-SNAPSHOT"}
SPARK_VER=${SPARK_VER:-"3.1.1"}
# Make a best attempt to set the default value for the shuffle shim.
# Note that SPARK_VER for non-Apache Spark flavors (i.e. databricks,
4 changes: 2 additions & 2 deletions pom.xml
@@ -23,7 +23,7 @@
<artifactId>rapids-4-spark-parent</artifactId>
<name>RAPIDS Accelerator for Apache Spark Root Project</name>
<description>The root project of the RAPIDS Accelerator for Apache Spark</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
<packaging>pom</packaging>

<url>https://nvidia.github.io/spark-rapids/</url>
@@ -817,7 +817,7 @@
<spark.test.version>${spark.version}</spark.test.version>
<spark.version.classifier>spark${buildver}</spark.version.classifier>
<cuda.version>cuda11</cuda.version>
-<spark-rapids-jni.version>22.06.0-SNAPSHOT</spark-rapids-jni.version>
+<spark-rapids-jni.version>22.08.0-SNAPSHOT</spark-rapids-jni.version>
<scala.binary.version>2.12</scala.binary.version>
<scala.recompileMode>incremental</scala.recompileMode>
<scala.version>2.12.15</scala.version>
4 changes: 2 additions & 2 deletions shuffle-plugin/pom.xml
@@ -22,13 +22,13 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>

<artifactId>rapids-4-spark-shuffle_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Shuffle Plugin</name>
<description>Accelerated shuffle plugin for the RAPIDS plugin for Apache Spark</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<dependencies>
<dependency>
4 changes: 2 additions & 2 deletions spark2-sql-plugin/pom.xml
@@ -22,12 +22,12 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-sql-meta_2.11</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Base Meta</name>
<description>The RAPIDS SQL plugin for Apache Spark Base Meta Information</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<properties>
<scala.binary.version>2.11</scala.binary.version>
@@ -1369,7 +1369,7 @@ object RapidsConf {
|On startup use: `--conf [conf key]=[conf value]`. For example:
|
|```
-|$SPARK_HOME/bin/spark --jars 'rapids-4-spark_2.12-22.06.0-SNAPSHOT.jar,cudf-22.06.0-SNAPSHOT-cuda11.jar' \
+|$SPARK_HOME/bin/spark --jars 'rapids-4-spark_2.12-22.08.0-SNAPSHOT.jar,cudf-22.08.0-SNAPSHOT-cuda11.jar' \
|--conf spark.plugins=com.nvidia.spark.SQLPlugin \
|--conf spark.rapids.sql.incompatibleOps.enabled=true
|```
4 changes: 2 additions & 2 deletions sql-plugin/pom.xml
@@ -22,12 +22,12 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-sql_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin</name>
<description>The RAPIDS SQL plugin for Apache Spark</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<dependencies>
<dependency>
@@ -1463,7 +1463,7 @@ object RapidsConf {
|On startup use: `--conf [conf key]=[conf value]`. For example:
|
|```
-|${SPARK_HOME}/bin/spark --jars rapids-4-spark_2.12-22.06.0-SNAPSHOT-cuda11.jar \
+|${SPARK_HOME}/bin/spark --jars rapids-4-spark_2.12-22.08.0-SNAPSHOT-cuda11.jar \
|--conf spark.plugins=com.nvidia.spark.SQLPlugin \
|--conf spark.rapids.sql.incompatibleOps.enabled=true
|```
@@ -53,13 +53,13 @@ import org.apache.spark.util.MutableURLClassLoader
E.g., Spark 3.2.0 Shim will use
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark320/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark320/
Spark 3.1.1 will use
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-22.06.0.jar!/spark311/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-22.08.0.jar!/spark311/
Using these Jar URL's allows referencing different bytecode produced from identical sources
by incompatible Scala / Spark dependencies.
4 changes: 2 additions & 2 deletions tests/pom.xml
@@ -22,12 +22,12 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-tests_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Tests</name>
<description>RAPIDS plugin for Apache Spark integration tests</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<dependencies>
<dependency>
4 changes: 2 additions & 2 deletions tools/pom.xml
@@ -22,13 +22,13 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-tools_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark tools</name>
<description>RAPIDS Accelerator for Apache Spark tools</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
<packaging>jar</packaging>

<properties>
4 changes: 2 additions & 2 deletions udf-compiler/pom.xml
@@ -22,12 +22,12 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-udf_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Scala UDF Plugin</name>
<description>The RAPIDS Scala UDF plugin for Apache Spark</description>
-<version>22.06.0-SNAPSHOT</version>
+<version>22.08.0-SNAPSHOT</version>

<dependencies>
<dependency>
