Remove support for databricks 7.0 runtime - shim spark300db (NVIDIA#1145)

* Remove databricks 3.0.0 shim layer

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* Update docs

Signed-off-by: Thomas Graves <tgraves@nvidia.com>
tgravescs authored and sperlingxx committed Nov 20, 2020
1 parent 99d89e9 commit aad21fe
Showing 22 changed files with 9 additions and 1,726 deletions.
1 change: 0 additions & 1 deletion docs/FAQ.md
@@ -28,7 +28,6 @@ top of these changes and release updates as quickly as possible.
 
 The RAPIDS Accelerator for Apache Spark officially supports
 [Apache Spark](get-started/getting-started-on-prem.md),
-[Databricks Runtime 7.0](get-started/getting-started-databricks.md)
 [Databricks Runtime 7.3](get-started/getting-started-databricks.md)
 and [Google Cloud Dataproc](get-started/getting-started-gcp.md).
 Most distributions based off of Apache Spark 3.0.0 should work, but because the plugin replaces
6 changes: 3 additions & 3 deletions docs/get-started/getting-started-databricks.md
@@ -9,9 +9,9 @@ parent: Getting-Started
 This guide will run through how to set up the RAPIDS Accelerator for Apache Spark 3.0 on Databricks. At the end of this guide, the reader will be able to run a sample Apache Spark application that runs on NVIDIA GPUs on Databricks.
 
 ## Prerequisites
-* Apache Spark 3.0 running in DataBricks Runtime 7.0 ML with GPU or Runtime 7.3 ML with GPU
-    * AWS: 7.0 ML (includes Apache Spark 3.0.0, GPU, Scala 2.12) or 7.3 LTS ML (includes Apache Spark 3.0.1, GPU, Scala 2.12)
-    * Azure: 7.0 ML (GPU, Scala 2.12, Spark 3.0.0) or 7.3 LTS ML (GPU, Scala 2.12, Spark 3.0.1)
+* Apache Spark 3.0 running in DataBricks Runtime 7.3 ML with GPU
+    * AWS: 7.3 LTS ML (includes Apache Spark 3.0.1, GPU, Scala 2.12)
+    * Azure: 7.3 LTS ML (GPU, Scala 2.12, Spark 3.0.1)
 
 The number of GPUs per node dictates the number of Spark executors that can run in that node.
4 changes: 2 additions & 2 deletions integration_tests/pom.xml
@@ -33,9 +33,9 @@
     </properties>
     <profiles>
         <profile>
-            <id>spark300dbtests</id>
+            <id>spark301dbtests</id>
             <properties>
-                <spark.test.version>3.0.0-databricks</spark.test.version>
+                <spark.test.version>3.0.1-databricks</spark.test.version>
             </properties>
         </profile>
         <profile>
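With this rename, the Databricks integration-test profile is selected under its new id. A hypothetical sketch of the mapping (the profile id and `spark.test.version` value come from the diff above; the commented `mvn` invocation, including the `-pl` module selector, is an assumption about how a real run would look):

```shell
# Hypothetical invocation after this commit (not run here):
#   mvn -P spark301dbtests -pl integration_tests verify
# Activating the renamed profile sets spark.test.version, tying the
# profile id to the Databricks runtime's Spark version:
profile="spark301dbtests"
spark_test_version="3.0.1-databricks"
echo "$profile -> $spark_test_version"
```

The old `-P spark300dbtests` selector would no longer match any profile after this change.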
2 changes: 1 addition & 1 deletion jenkins/Jenkinsfile-blossom.premerge
@@ -223,7 +223,7 @@ pipeline {
                 step([$class : 'JacocoPublisher',
                       execPattern : '**/target/jacoco.exec',
                       classPattern : 'target/jacoco_classes/',
-                      sourcePattern : 'shuffle-plugin/src/main/scala/,udf-compiler/src/main/scala/,sql-plugin/src/main/java/,sql-plugin/src/main/scala/,shims/spark310/src/main/scala/,shims/spark300/src/main/scala/,shims/spark300db/src/main/scala/,shims/spark301/src/main/scala/,shims/spark302/src/main/scala/',
+                      sourcePattern : 'shuffle-plugin/src/main/scala/,udf-compiler/src/main/scala/,sql-plugin/src/main/java/,sql-plugin/src/main/scala/,shims/spark310/src/main/scala/,shims/spark300/src/main/scala/,shims/spark301db/src/main/scala/,shims/spark301/src/main/scala/,shims/spark302/src/main/scala/',
                       sourceInclusionPattern: '**/*.java,**/*.scala'
                 ])
             }
116 changes: 0 additions & 116 deletions jenkins/Jenkinsfile.databricksnightly

This file was deleted.

110 changes: 0 additions & 110 deletions jenkins/Jenkinsfile.databricksrelease

This file was deleted.

2 changes: 1 addition & 1 deletion pom.xml
@@ -125,7 +125,7 @@
         </build>
     </profile>
     <profile>
-        <id>databricks</id>
+        <id>databricks301</id>
         <properties>
             <rat.consoleOutput>true</rat.consoleOutput>
         </properties>
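The build profile in the root `pom.xml` is likewise renamed from `databricks` to `databricks301`. A sketch of the before/after invocation (the commented `mvn` command lines, including `package` as the goal, are assumptions; only the profile ids are taken from the diff):

```shell
# Hypothetical build commands (not run here):
#   before this commit:  mvn -P databricks package
#   after this commit:   mvn -P databricks301 package
old_profile="databricks"
new_profile="databricks301"
echo "$old_profile -> $new_profile"
```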
17 changes: 0 additions & 17 deletions shims/aggregator/pom.xml
@@ -33,17 +33,6 @@
     <version>0.3.0-SNAPSHOT</version>
 
     <profiles>
-        <profile>
-            <id>databricks</id>
-            <dependencies>
-                <dependency>
-                    <groupId>com.nvidia</groupId>
-                    <artifactId>rapids-4-spark-shims-spark300-databricks_${scala.binary.version}</artifactId>
-                    <version>${project.version}</version>
-                    <scope>compile</scope>
-                </dependency>
-            </dependencies>
-        </profile>
         <profile>
             <id>databricks301</id>
             <dependencies>
@@ -59,12 +48,6 @@
         <!-- use a separate profile to just pull databricks from maven repository without building it -->
         <id>include-databricks</id>
         <dependencies>
-            <dependency>
-                <groupId>com.nvidia</groupId>
-                <artifactId>rapids-4-spark-shims-spark300-databricks_${scala.binary.version}</artifactId>
-                <version>${project.version}</version>
-                <scope>compile</scope>
-            </dependency>
             <dependency>
                 <groupId>com.nvidia</groupId>
                 <artifactId>rapids-4-spark-shims-spark301-databricks_${scala.binary.version}</artifactId>
6 changes: 0 additions & 6 deletions shims/pom.xml
@@ -33,12 +33,6 @@
     <version>0.3.0-SNAPSHOT</version>
 
     <profiles>
-        <profile>
-            <id>databricks</id>
-            <modules>
-                <module>spark300db</module>
-            </modules>
-        </profile>
        <profile>
            <id>databricks301</id>
            <modules>