[DOC] update notes in download page for the decompressing gzip issue [skip ci] #11400

Merged 3 commits on Aug 28, 2024
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -92,7 +92,7 @@ CodeCache: size=245760Kb used=236139Kb max_used=243799Kb free=9620Kb
compilation: disabled (not enough contiguous free space left)
```

-It can be mitigated by increasing [ReservedCodeCacheSize](https://spark.apache.org/docs/3.3.1/building-spark.html#setting-up-mavens-memory-usage)
+It can be mitigated by increasing [ReservedCodeCacheSize](https://spark.apache.org/docs/latest/building-spark.html#setting-up-mavens-memory-usage)
passed in the `MAVEN_OPTS` environment variable.
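
For illustration, a minimal sketch of that setting (the `1g` size is an assumed example, not a value taken from this diff):

```bash
# Raise the JIT code cache before invoking Maven; 1g is an assumed
# example size — adjust for your build machine.
export MAVEN_OPTS="-XX:ReservedCodeCacheSize=1g"
mvn clean verify
```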

### Building a Distribution for Multiple Versions of Spark
3 changes: 3 additions & 0 deletions docs/download.md
@@ -101,6 +101,9 @@ The output of signature verify:
* Improve UCX shuffle
* For updates on RAPIDS Accelerator Tools, please visit [this link](https://github.com/NVIDIA/spark-rapids-tools/releases)

+Note: There is a known issue in the 24.08.1 release when decompressing gzip files on H100 GPUs.
+Please find more details in [issue-16661](https://github.com/rapidsai/cudf/issues/16661).

For a detailed list of changes, please refer to the
[CHANGELOG](https://github.com/NVIDIA/spark-rapids/blob/main/CHANGELOG.md).

4 changes: 2 additions & 2 deletions integration_tests/README.md
@@ -387,7 +387,7 @@ test_my_new_added_case_for_sequence_operator()
### Reviewing integration tests in Spark History Server

If the integration tests are run using [run_pyspark_from_build.sh](run_pyspark_from_build.sh) we have
-the [event log enabled](https://spark.apache.org/docs/3.1.1/monitoring.html) by default. You can opt
+the [event log enabled](https://spark.apache.org/docs/latest/monitoring.html) by default. You can opt
out by setting the environment variable `SPARK_EVENTLOG_ENABLED` to `false`.
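
A minimal sketch of the opt-out, run from the `integration_tests` directory:

```bash
# Disable event logging for this test run via the documented env var.
SPARK_EVENTLOG_ENABLED=false ./run_pyspark_from_build.sh
```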

Compressed event logs will appear under the run directories of the form
@@ -404,7 +404,7 @@ SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=integration_tests/target/run
By default, integration tests write event logs using [Zstandard](https://facebook.github.io/zstd/)
(`zstd`) compression codec. It can be changed by setting the environment variable `PYSP_TEST_spark_eventLog_compression_codec` to one of
the SHS supported values for the config key
-[`spark.eventLog.compression.codec`](https://spark.apache.org/docs/3.1.1/configuration.html#spark-ui)
+[`spark.eventLog.compression.codec`](https://spark.apache.org/docs/latest/configuration.html#spark-ui)
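
For example, a sketch switching to `lz4`, one of the codecs Spark accepts for `spark.eventLog.compression.codec`:

```bash
# Write event logs with lz4 instead of the default zstd.
export PYSP_TEST_spark_eventLog_compression_codec=lz4
./run_pyspark_from_build.sh
```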

With `zstd` it's easy to view / decompress event logs using the CLI `zstd -d [--stdout] <file>`
even without the SHS webUI.
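
For instance, assuming a hypothetical log file name:

```bash
# Decompress a zstd-compressed event log to stdout and page through it.
zstd -d --stdout app-12345.zstd | less
```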