Prepare for v0.4.1 release (NVIDIA#1990)
Signed-off-by: Jason Lowe <jlowe@nvidia.com>
jlowe authored Mar 23, 2021
1 parent 387d3bf commit d2543c5
Showing 28 changed files with 97 additions and 62 deletions.
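
The commit itself does not record how the bump was produced, so the following is only a hypothetical sketch of how a patch-version bump like this is commonly applied: the Maven modules can be rewritten with the versions plugin, while the docs, notebooks, Dockerfile, and CI scripts embed the version as plain text and need a separate pass.

```shell
# Hypothetical sketch of a 0.4.0 -> 0.4.1 bump; the actual commit may have been
# edited by hand or by a project-specific script.
NEW_VER=0.4.1-SNAPSHOT

# Rewrite the <version> of every module in the multi-module Maven build
# (parent, dist, shims, integration tests, ...).
mvn versions:set -DnewVersion="${NEW_VER}" -DprocessAllModules=true -DgenerateBackupPoms=false

# Docs, notebooks, Dockerfiles, and CI scripts carry the version as plain text;
# list the remaining occurrences and update by hand only the ones that should
# move to the new version (e.g. not the "Release v0.4.0" history section).
grep -rl '0\.4\.0' docs jenkins integration_tests
```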
6 changes: 3 additions & 3 deletions api_validation/pom.xml
@@ -22,10 +22,10 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
</parent>
<artifactId>rapids-4-spark-api-validation</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<profiles>
<profile>
@@ -78,7 +78,7 @@
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-aggregator_${scala.binary.version}</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<scope>provided</scope>
</dependency>
</dependencies>
6 changes: 3 additions & 3 deletions dist/pom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2020, NVIDIA CORPORATION.
Copyright (c) 2020-2021, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,13 +22,13 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Distribution</name>
<description>Creates the distribution package of the RAPIDS plugin for Apache Spark</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<dependencies>
<dependency>
2 changes: 1 addition & 1 deletion docs/configs.md
@@ -10,7 +10,7 @@ The following is the list of options that `rapids-plugin-4-spark` supports.
On startup use: `--conf [conf key]=[conf value]`. For example:

```
${SPARK_HOME}/bin/spark --jars 'rapids-4-spark_2.12-0.4.0.jar,cudf-0.18.1-cuda10-1.jar' \
${SPARK_HOME}/bin/spark --jars 'rapids-4-spark_2.12-0.4.1.jar,cudf-0.18.1-cuda10-1.jar' \
--conf spark.plugins=com.nvidia.spark.SQLPlugin \
--conf spark.rapids.sql.incompatibleOps.enabled=true
```
2 changes: 1 addition & 1 deletion docs/demo/Databricks/generate-init-script.ipynb
@@ -1 +1 @@
{"cells":[{"cell_type":"code","source":["dbutils.fs.mkdirs(\"dbfs:/databricks/init_scripts/\")\n \ndbutils.fs.put(\"/databricks/init_scripts/init.sh\",\"\"\"\n#!/bin/bash\nsudo wget -O /databricks/jars/rapids-4-spark_2.12-0.4.0.jar https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/0.4.0/rapids-4-spark_2.12-0.4.0.jar\nsudo wget -O /databricks/jars/cudf-0.18.1-cuda10-1.jar https://repo1.maven.org/maven2/ai/rapids/cudf/0.18.1/cudf-0.18.1-cuda10-1.jar\"\"\", True)"],"metadata":{},"outputs":[],"execution_count":1},{"cell_type":"code","source":["%sh\ncd ../../dbfs/databricks/init_scripts\npwd\nls -ltr\ncat init.sh"],"metadata":{},"outputs":[],"execution_count":2},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":3}],"metadata":{"name":"generate-init-script","notebookId":2645746662301564},"nbformat":4,"nbformat_minor":0}
{"cells":[{"cell_type":"code","source":["dbutils.fs.mkdirs(\"dbfs:/databricks/init_scripts/\")\n \ndbutils.fs.put(\"/databricks/init_scripts/init.sh\",\"\"\"\n#!/bin/bash\nsudo wget -O /databricks/jars/rapids-4-spark_2.12-0.4.1.jar https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/0.4.1/rapids-4-spark_2.12-0.4.1.jar\nsudo wget -O /databricks/jars/cudf-0.18.1-cuda10-1.jar https://repo1.maven.org/maven2/ai/rapids/cudf/0.18.1/cudf-0.18.1-cuda10-1.jar\"\"\", True)"],"metadata":{},"outputs":[],"execution_count":1},{"cell_type":"code","source":["%sh\ncd ../../dbfs/databricks/init_scripts\npwd\nls -ltr\ncat init.sh"],"metadata":{},"outputs":[],"execution_count":2},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":3}],"metadata":{"name":"generate-init-script","notebookId":2645746662301564},"nbformat":4,"nbformat_minor":0}
35 changes: 35 additions & 0 deletions docs/download.md
@@ -4,6 +4,41 @@ title: Download
nav_order: 3
---

## Release v0.4.1

This is a patch release based on version 0.4.0 with the following additional fixes:
* Broadcast exchange can fail when job group is set

The release is supported on Apache Spark 3.0.0, 3.0.1, 3.0.2, 3.1.1, Databricks 7.3 ML LTS and
Google Cloud Platform Dataproc 2.0.

The list of all supported operations is provided [here](supported_ops.md).

For a detailed list of changes, please refer to the
[CHANGELOG](https://github.com/NVIDIA/spark-rapids/blob/main/CHANGELOG.md).

Hardware Requirements:

GPU Architecture: NVIDIA Pascal™ or better (Tested on V100, T4 and A100 GPU)

Software Requirements:

OS: Ubuntu 16.04, Ubuntu 18.04 or CentOS 7

CUDA & Nvidia Drivers: 10.1.2 & v418.87+, 10.2 & v440.33+ or 11.0 & v450.36+

Apache Spark 3.0, 3.0.1, 3.0.2, 3.1.1, Databricks 7.3 ML LTS Runtime, or GCP Dataproc 2.0

Apache Hadoop 2.10+ or 3.1.1+ (3.1.1 for nvidia-docker version 2)

Python 3.6+, Scala 2.12, Java 8

### Download v0.4.1
* [RAPIDS Spark Package](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/0.4.1/rapids-4-spark_2.12-0.4.1.jar)
* [cuDF 11.0 Package](https://repo1.maven.org/maven2/ai/rapids/cudf/0.18.1/cudf-0.18.1-cuda11.jar)
* [cuDF 10.2 Package](https://repo1.maven.org/maven2/ai/rapids/cudf/0.18.1/cudf-0.18.1-cuda10-2.jar)
* [cuDF 10.1 Package](https://repo1.maven.org/maven2/ai/rapids/cudf/0.18.1/cudf-0.18.1-cuda10-1.jar)

## Release v0.4.0

New functionality for the release includes
2 changes: 1 addition & 1 deletion docs/get-started/Dockerfile.cuda
@@ -53,7 +53,7 @@ COPY spark-3.0.1-bin-hadoop3.2/kubernetes/tests /opt/spark/tests
COPY spark-3.0.1-bin-hadoop3.2/data /opt/spark/data

COPY cudf-0.18.1-cuda10-1.jar /opt/sparkRapidsPlugin
COPY rapids-4-spark_2.12-0.4.0.jar /opt/sparkRapidsPlugin
COPY rapids-4-spark_2.12-0.4.1.jar /opt/sparkRapidsPlugin
COPY getGpusResources.sh /opt/sparkRapidsPlugin

RUN mkdir /opt/spark/python
6 changes: 3 additions & 3 deletions docs/get-started/getting-started-on-prem.md
@@ -56,15 +56,15 @@ CUDA and will not run on other versions. The jars use a maven classifier to keep

For example, here is a sample version of the jars and cudf with CUDA 10.1 support:
- cudf-0.18.1-cuda10-1.jar
- rapids-4-spark_2.12-0.4.0.jar
- rapids-4-spark_2.12-0.4.1.jar


For simplicity export the location to these jars. This example assumes the sample jars above have
been placed in the `/opt/sparkRapidsPlugin` directory:
```shell
export SPARK_RAPIDS_DIR=/opt/sparkRapidsPlugin
export SPARK_CUDF_JAR=${SPARK_RAPIDS_DIR}/cudf-0.18.1-cuda10-1.jar
export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-0.4.0.jar
export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-0.4.1.jar
```

## Install the GPU Discovery Script
@@ -437,7 +437,7 @@ To enable _GPU Scheduling for Pandas UDF_, you need to configure your spark job
On Standalone, you need to add
```shell
...
--conf spark.executorEnv.PYTHONPATH=rapids-4-spark_2.12-0.4.0.jar \
--conf spark.executorEnv.PYTHONPATH=rapids-4-spark_2.12-0.4.1.jar \
--py-files ${SPARK_RAPIDS_PLUGIN_JAR}
```

6 changes: 3 additions & 3 deletions integration_tests/README.md
@@ -107,7 +107,7 @@ individually, so you don't risk running unit tests along with the integration te
http://www.scalatest.org/user_guide/using_the_scalatest_shell

```shell
spark-shell --jars rapids-4-spark-tests_2.12-0.4.0-tests.jar,rapids-4-spark-udf-examples_2.12-0.4.0,rapids-4-spark-integration-tests_2.12-0.4.0-tests.jar,scalatest_2.12-3.0.5.jar,scalactic_2.12-3.0.5.jar
spark-shell --jars rapids-4-spark-tests_2.12-0.4.1-tests.jar,rapids-4-spark-udf-examples_2.12-0.4.1,rapids-4-spark-integration-tests_2.12-0.4.1-tests.jar,scalatest_2.12-3.0.5.jar,scalactic_2.12-3.0.5.jar
```

First you import the `scalatest_shell` and tell the tests where they can find the test files you
@@ -131,7 +131,7 @@ If you just want to verify the SQL replacement is working you will need to add t
example assumes CUDA 10.1 is being used.

```
$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.4.0.jar,rapids-4-spark-udf-examples_2.12-0.4.0.jar,cudf-0.18.1-cuda10-1.jar" ./runtests.py
$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.4.1.jar,rapids-4-spark-udf-examples_2.12-0.4.1.jar,cudf-0.18.1-cuda10-1.jar" ./runtests.py
```

You don't have to enable the plugin for this to work, the test framework will do that for you.
@@ -192,7 +192,7 @@ To run cudf_udf tests, need following configuration changes:
As an example, here is the `spark-submit` command with the cudf_udf parameter on CUDA 10.1:

```
$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.4.0.jar,rapids-4-spark-udf-examples_2.12-0.4.0.jar,cudf-0.18.1-cuda10-1.jar,rapids-4-spark-tests_2.12-0.4.0.jar" --conf spark.rapids.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.concurrentPythonWorkers=2 --py-files "rapids-4-spark_2.12-0.4.0.jar" --conf spark.executorEnv.PYTHONPATH="rapids-4-spark_2.12-0.4.0.jar" ./runtests.py --cudf_udf
$SPARK_HOME/bin/spark-submit --jars "rapids-4-spark_2.12-0.4.1.jar,rapids-4-spark-udf-examples_2.12-0.4.1.jar,cudf-0.18.1-cuda10-1.jar,rapids-4-spark-tests_2.12-0.4.1.jar" --conf spark.rapids.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.memory.gpu.allocFraction=0.3 --conf spark.rapids.python.concurrentPythonWorkers=2 --py-files "rapids-4-spark_2.12-0.4.1.jar" --conf spark.executorEnv.PYTHONPATH="rapids-4-spark_2.12-0.4.1.jar" ./runtests.py --cudf_udf
```

## Writing tests
4 changes: 2 additions & 2 deletions integration_tests/pom.xml
@@ -22,11 +22,11 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-integration-tests_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<properties>
<spark.test.version>${spark300.version}</spark.test.version>
4 changes: 2 additions & 2 deletions jenkins/databricks/create.py
@@ -1,4 +1,4 @@
# Copyright (c) 2020, NVIDIA CORPORATION.
# Copyright (c) 2020-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -27,7 +27,7 @@ def main():
workspace = 'https://dbc-9ff9942e-a9c4.cloud.databricks.com'
token = ''
sshkey = ''
cluster_name = 'CI-GPU-databricks-0.4.0-SNAPSHOT'
cluster_name = 'CI-GPU-databricks-0.4.1-SNAPSHOT'
idletime = 240
runtime = '7.0.x-gpu-ml-scala2.12'
num_workers = 1
6 changes: 3 additions & 3 deletions jenkins/version-def.sh
@@ -1,6 +1,6 @@
#!/bin/bash
#
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Copyright (c) 2020-2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -28,8 +28,8 @@ IFS=$PRE_IFS

CUDF_VER=${CUDF_VER:-"0.18.1"}
CUDA_CLASSIFIER=${CUDA_CLASSIFIER:-"cuda10-1"}
PROJECT_VER=${PROJECT_VER:-"0.4.0-SNAPSHOT"}
PROJECT_TEST_VER=${PROJECT_TEST_VER:-"0.4.0-SNAPSHOT"}
PROJECT_VER=${PROJECT_VER:-"0.4.1-SNAPSHOT"}
PROJECT_TEST_VER=${PROJECT_TEST_VER:-"0.4.1-SNAPSHOT"}
SPARK_VER=${SPARK_VER:-"3.0.0"}
SCALA_BINARY_VER=${SCALA_BINARY_VER:-"2.12"}
SERVER_ID=${SERVER_ID:-"snapshots"}
2 changes: 1 addition & 1 deletion pom.xml
@@ -23,7 +23,7 @@
<artifactId>rapids-4-spark-parent</artifactId>
<name>RAPIDS Accelerator for Apache Spark Root Project</name>
<description>The root project of the RAPIDS Accelerator for Apache Spark</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<packaging>pom</packaging>

<url>https://github.com/NVIDIA</url>
4 changes: 2 additions & 2 deletions shims/aggregator/pom.xml
@@ -22,15 +22,15 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-aggregator_2.12</artifactId>
<packaging>jar</packaging>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Shim Aggregator</name>
<description>The RAPIDS SQL plugin for Apache Spark Shim Aggregator</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<profiles>
<profile>
4 changes: 2 additions & 2 deletions shims/pom.xml
@@ -22,15 +22,15 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-parent</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<packaging>pom</packaging>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Shims</name>
<description>The RAPIDS SQL plugin for Apache Spark Shims</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<profiles>
<profile>
6 changes: 3 additions & 3 deletions shims/spark300/pom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2020, NVIDIA CORPORATION.
Copyright (c) 2020-2021, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,14 +22,14 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-spark300_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Spark 3.0.0 Shim</name>
<description>The RAPIDS SQL plugin for Apache Spark 3.0.0 Shim</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<!-- Set 'spark.version' for the shims layer -->
<!-- Create a separate file 'SPARK_VER.properties' in the jar to save cudf & spark version info -->
6 changes: 3 additions & 3 deletions shims/spark300emr/pom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2020, NVIDIA CORPORATION.
Copyright (c) 2020-2021, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,14 +22,14 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-spark300emr_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Spark 3.0.0 EMR Shim</name>
<description>The RAPIDS SQL plugin for Apache Spark 3.0.0 EMR Shim</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<!-- Set 'spark.version' for the shims layer -->
<!-- Create a separate file 'SPARK_VER.properties' in the jar to save cudf & spark version info -->
6 changes: 3 additions & 3 deletions shims/spark301/pom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2020, NVIDIA CORPORATION.
Copyright (c) 2020-2021, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,14 +22,14 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-spark301_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Spark 3.0.1 Shim</name>
<description>The RAPIDS SQL plugin for Apache Spark 3.0.1 Shim</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<!-- Set 'spark.version' for the shims layer -->
<!-- Create a separate file 'SPARK_VER.properties' in the jar to save cudf & spark version info -->
4 changes: 2 additions & 2 deletions shims/spark301db/pom.xml
@@ -22,14 +22,14 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-spark301-databricks_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Spark 3.0.1 Databricks Shim</name>
<description>The RAPIDS SQL plugin for Apache Spark 3.0.1 Databricks Shim</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<!-- Set 'spark.version' for the shims layer -->
<!-- Create a separate file 'SPARK_VER.properties' in the jar to save cudf & spark version info -->
6 changes: 3 additions & 3 deletions shims/spark301emr/pom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2020, NVIDIA CORPORATION.
Copyright (c) 2020-2021, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,14 +22,14 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-spark301emr_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Spark 3.0.1 EMR Shim</name>
<description>The RAPIDS SQL plugin for Apache Spark 3.0.1 EMR Shim</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<!-- Set 'spark.version' for the shims layer -->
<!-- Create a separate file 'SPARK_VER.properties' in the jar to save cudf & spark version info -->
6 changes: 3 additions & 3 deletions shims/spark302/pom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2020, NVIDIA CORPORATION.
Copyright (c) 2020-2021, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,14 +22,14 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims_2.12</artifactId>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shims-spark302_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark SQL Plugin Spark 3.0.2 Shim</name>
<description>The RAPIDS SQL plugin for Apache Spark 3.0.2 Shim</description>
<version>0.4.0-SNAPSHOT</version>
<version>0.4.1-SNAPSHOT</version>

<!-- Set 'spark.version' for the shims layer -->
<!-- Create a separate file 'SPARK_VER.properties' in the jar to save cudf & spark version info -->

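For anyone reviewing or reproducing a release bump like this one, a quick sanity check (hypothetical, not part of the commit) is to confirm that no module still references the old snapshot version and that the build resolves to the new one:

```shell
# Should print nothing once every pom.xml has been updated.
grep -rn '0\.4\.0-SNAPSHOT' --include=pom.xml .

# Ask Maven which version the build resolves to; expect 0.4.1-SNAPSHOT.
mvn -q help:evaluate -Dexpression=project.version -DforceStdout
```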