[BUG] Job failed with SparkUpgradeException no matter which value are set for spark.sql.parquet.datetimeRebaseModeInRead #9540
Comments
It's likely set in the parquet file itself, and that overrides any of the configs. This is a dup of #9059.
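That footer override can be checked for directly: when a file is written with the LEGACY rebase mode, Spark embeds an `org.apache.spark.legacyDateTime` key in the parquet footer metadata. A minimal stdlib-only sketch (the helper name is hypothetical, and a raw byte scan is a heuristic rather than a real parquet footer parser):

```python
# Hypothetical helper (not from this issue): detect whether a parquet file
# appears to have been written with Spark's LEGACY datetime rebase mode, by
# scanning the raw bytes for the footer metadata key Spark embeds in that case.
LEGACY_KEY = b"org.apache.spark.legacyDateTime"

def wrote_with_legacy_rebase(path):
    """Return True if the file's bytes contain Spark's legacy-rebase footer key."""
    with open(path, "rb") as f:
        return LEGACY_KEY in f.read()
```

If this key is present, readers rebase the dates regardless of what `spark.sql.parquet.datetimeRebaseModeInRead` is set to, which would explain why changing the config has no effect.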
I also tried below but still failed each time:
The issue is there is no workaround.
Also reproduced on a local env: spark-3.3.2-bin-hadoop3 + spark rapids 23.08.1.
Update:
Yes, because
So currently we can't support a round-trip write/read.
I tested all 9 combinations using Apache Spark 3.3 + spark rapids 23.08.1, comparing CPU vs GPU mode:
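The 9 combinations here are presumably the cross product of the three documented rebase-mode values on the write side and the read side; a quick sketch enumerating them (config keys and values as in Spark's documentation):

```python
from itertools import product

# The three documented values for Spark's parquet datetime rebase configs.
MODES = ("EXCEPTION", "CORRECTED", "LEGACY")

# All write-mode x read-mode pairs: 3 x 3 = 9 test-matrix entries.
combinations = [
    {"spark.sql.parquet.datetimeRebaseModeInWrite": w,
     "spark.sql.parquet.datetimeRebaseModeInRead": r}
    for w, r in product(MODES, MODES)
]
assert len(combinations) == 9
```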
23.10 snapshot jar test:
In summary, we need to make sure the:
and no matter what value we set for
Closing as dup of #9059.
Describe the bug
The query below failed with a SparkUpgradeException.
No matter which value we set below, it always failed with the same error:
The CPU run always works fine.
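For context, `spark.sql.parquet.datetimeRebaseModeInRead` accepts three values per Spark's configuration documentation; a spark-defaults style fragment showing the options that were cycled through (pick one):

```properties
# spark-defaults.conf style; the three documented values for the read-side config
spark.sql.parquet.datetimeRebaseModeInRead  EXCEPTION   # default: fail on ambiguous ancient dates
spark.sql.parquet.datetimeRebaseModeInRead  CORRECTED   # read as-is (Proleptic Gregorian calendar)
spark.sql.parquet.datetimeRebaseModeInRead  LEGACY      # rebase Julian -> Proleptic Gregorian
```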
Steps/Code to reproduce bug
Expected behavior
The query should succeed on the GPU, matching the CPU result, regardless of the rebase mode setting.
Environment details (please complete the following information)
Databricks 10.4 ML LTS + Spark RAPIDS 23.08.2 or 23.06
Databricks 11.3 ML LTS + Spark RAPIDS 23.08.2
On-Prem Apache Spark 3.3 + Spark RAPIDS 23.08.2