Additional functionality docs (NVIDIA#1621)
Signed-off-by: Sameer Raheja <sraheja@nvidia.com>
sameerz authored Jan 29, 2021
1 parent fe5c7de commit d932209
Showing 10 changed files with 21 additions and 12 deletions.
4 changes: 2 additions & 2 deletions docs/FAQ.md
@@ -1,7 +1,7 @@
---
layout: page
title: Frequently Asked Questions
-nav_order: 10
+nav_order: 11
---
# Frequently Asked Questions

@@ -205,7 +205,7 @@ user-defined functions on the GPU:
#### RAPIDS-Accelerated UDFs

UDFs can provide a RAPIDS-accelerated implementation which allows the RAPIDS Accelerator to perform
-the operation on the GPU. See the [RAPIDS-accelerated UDF documentation](../docs/rapids-udfs.md)
+the operation on the GPU. See the [RAPIDS-accelerated UDF documentation](additional-functionality/rapids-udfs.md)
for details.
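As a hedged illustration of the RAPIDS-accelerated UDF approach described above, the sketch below pairs a row-based CPU implementation with a columnar GPU one. The `RapidsUDF` trait, the `evaluateColumnar` signature, and the `urlDecode` column operation are assumptions based on the plugin's general approach, not taken from this commit; the linked rapids-udfs.md page is the authoritative reference for the real interface.

```scala
// Hypothetical sketch of a RAPIDS-accelerated UDF. The RapidsUDF trait,
// the evaluateColumnar signature, and ColumnVector.urlDecode are assumed
// here for illustration; see rapids-udfs.md for the actual interface.
import ai.rapids.cudf.ColumnVector
import com.nvidia.spark.RapidsUDF
import org.apache.spark.sql.api.java.UDF1

class URLDecode extends UDF1[String, String] with RapidsUDF {
  // Row-based CPU fallback, used when the expression cannot run on the GPU
  override def call(s: String): String =
    if (s == null) null else java.net.URLDecoder.decode(s, "utf-8")

  // Columnar implementation the RAPIDS Accelerator invokes on the GPU
  override def evaluateColumnar(args: ColumnVector*): ColumnVector = {
    require(args.length == 1, s"Unexpected argument count: ${args.length}")
    args.head.urlDecode()
  }
}
```

Registering such a UDF with `spark.udf.register` lets the plugin detect the columnar implementation and keep the surrounding query on the GPU.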

#### Automatic Translation of Scala UDFs to Apache Spark Operations
7 changes: 7 additions & 0 deletions docs/additional-functionality/README.md
@@ -0,0 +1,7 @@
+---
+layout: page
+title: Additional Functionality
+nav_order: 9
+has_children: true
+permalink: /additional-functionality/
+---
@@ -1,7 +1,8 @@
---
layout: page
title: RAPIDS Cache Serializer
-nav_order: 12
+parent: Additional Functionality
+nav_order: 2
---
# RAPIDS Cache Serializer
Apache Spark provides an important feature to cache intermediate data and provide
@@ -36,4 +37,4 @@ nav_order: 12
```
spark-shell --conf spark.sql.cache.serializer=com.nvidia.spark.rapids.shims.spark311.ParquetCachedBatchSerializer
```
-To use the default serializer don't set the `spark.sql.cache.serializer` conf
+To use the default serializer don't set the `spark.sql.cache.serializer` conf
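Once the serializer is configured, caching goes through the normal DataFrame API. A minimal sketch, setting the same conf programmatically instead of on the `spark-shell` command line (the serializer class name is version-specific, here the spark311 shim, and the sample data is purely illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Same conf as the spark-shell example above, set via the session builder.
val spark = SparkSession.builder()
  .appName("cache-serializer-example")
  .config("spark.sql.cache.serializer",
    "com.nvidia.spark.rapids.shims.spark311.ParquetCachedBatchSerializer")
  .getOrCreate()

// Cached batches for this DataFrame are then written with the
// configured Parquet-based serializer.
val df = spark.range(0, 1000000).selectExpr("id", "id % 10 AS bucket")
df.cache()
df.count() // materializes the cache
```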
@@ -1,7 +1,8 @@
---
layout: page
title: ML Integration
-nav_order: 8
+parent: Additional Functionality
+nav_order: 1
---
# RAPIDS Accelerator for Apache Spark ML Library Integration

@@ -1,7 +1,8 @@
---
layout: page
title: RAPIDS-Accelerated User-Defined Functions
-nav_order: 9
+parent: Additional Functionality
+nav_order: 3
---
# RAPIDS-Accelerated User-Defined Functions

2 changes: 1 addition & 1 deletion docs/benchmarks.md
@@ -1,7 +1,7 @@
---
layout: page
title: Benchmarks
-nav_exclude: true
+nav_order: 8
---
# Benchmarks

2 changes: 1 addition & 1 deletion docs/dev/README.md
@@ -1,7 +1,7 @@
---
layout: page
title: Developer Overview
-nav_order: 12
+nav_order: 10
has_children: true
permalink: /developer-overview/
---
1 change: 0 additions & 1 deletion docs/dev/testing.md
@@ -7,4 +7,3 @@ parent: Developer Overview
An overview of testing can be found within the repository at:
* [Unit tests](https://github.com/NVIDIA/spark-rapids/tree/branch-0.3/tests)
* [Integration testing](https://github.com/NVIDIA/spark-rapids/tree/branch-0.3/integration_tests)
-* [Benchmarks](../benchmarks.md)
4 changes: 2 additions & 2 deletions docs/examples.md
@@ -1,7 +1,7 @@
---
layout: page
-title: Demos
-nav_order: 11
+title: Examples
+nav_order: 12
---
# Demos

Expand Down
2 changes: 1 addition & 1 deletion docs/tuning-guide.md
@@ -42,7 +42,7 @@
called [RMM](https://github.com/rapidsai/rmm) to mitigate this overhead. By default
the plugin will allocate `90%` (`0.9`) of the memory on the GPU and keep it as a pool that can
be allocated from. If the pool is exhausted more memory will be allocated and added to the pool.
Most of the time this is a huge win, but if you need to share the GPU with other
-[libraries](ml-integration.md) that are not aware of RMM this can lead to memory issues, and you
+[libraries](additional-functionality/ml-integration.md) that are not aware of RMM this can lead to memory issues, and you
may need to disable pooling.
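A hedged sketch of sharing the GPU with an RMM-unaware library follows. The config keys below (`spark.rapids.memory.gpu.allocFraction` and `spark.rapids.memory.gpu.pooling.enabled`) are assumptions based on the plugin's config naming and may differ by release; verify them against the configs documentation for your version.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: these RAPIDS Accelerator config keys are assumed for
// illustration and should be checked against the configs doc.
val spark = SparkSession.builder()
  .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
  // Shrink the initial RMM pool from the default 90% so other
  // libraries can allocate GPU memory outside the pool
  .config("spark.rapids.memory.gpu.allocFraction", "0.5")
  // Or disable pooling entirely if the other library cannot
  // coexist with a pre-allocated pool
  .config("spark.rapids.memory.gpu.pooling.enabled", "false")
  .getOrCreate()
```

Disabling pooling trades the allocation-overhead win described above for compatibility, so prefer shrinking the pool first.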

## Pinned Memory
