Support saveAsTable for writing orc and parquet #1134
Merged
Conversation
revans2 approved these changes on Nov 17, 2020
For the code that was copied and pasted from Spark, I mostly checked that it matches what Spark has. I am a little concerned about the amount of code we have copied over, but it should be fine for now.
jlowe approved these changes on Nov 17, 2020
sperlingxx pushed a commit to sperlingxx/spark-rapids that referenced this pull request on Nov 20, 2020:

* start saveAsTable
* Add GpuDataSource
* columnar ifle format
* Update to GpuFileFormat
* fix typo
* logging
* more logging
* change format parquet
* fix classof
* fix run to runColumnar
* using original providing instance for end
* remove unneeded code and pass in providers so don't calculate twice
* create shim for SchemaUtils checkSchemaColumnNameDuplication

  Signed-off-by: Thomas Graves <tgraves@apache.org>
* fix typo with checkSchemaColumnNameDuplication
* fix name
* fix calling
* fix anothername
* fix none
* Fix provider vs FileFormat
* split read/write tests
* Write a bunch more tests for orc and parquet writing

  Signed-off-by: Thomas Graves <tgraves@nvidia.com>
* cleanup and csv test
* Add more test
* Add bucket write test

  Signed-off-by: Thomas Graves <tgraves@nvidia.com>
* remove debug logs

  Signed-off-by: Thomas Graves <tgraves@nvidia.com>
* Update for spark 3.1.0
nartal1 pushed a commit to nartal1/spark-rapids that referenced this pull request on Jun 9, 2021:

* start saveAsTable
* Add GpuDataSource
* columnar ifle format
* Update to GpuFileFormat
* fix typo
* logging
* more logging
* change format parquet
* fix classof
* fix run to runColumnar
* using original providing instance for end
* remove unneeded code and pass in providers so don't calculate twice
* create shim for SchemaUtils checkSchemaColumnNameDuplication

  Signed-off-by: Thomas Graves <tgraves@apache.org>
* fix typo with checkSchemaColumnNameDuplication
* fix name
* fix calling
* fix anothername
* fix none
* Fix provider vs FileFormat
* split read/write tests
* Write a bunch more tests for orc and parquet writing

  Signed-off-by: Thomas Graves <tgraves@nvidia.com>
* cleanup and csv test
* Add more test
* Add bucket write test

  Signed-off-by: Thomas Graves <tgraves@nvidia.com>
* remove debug logs

  Signed-off-by: Thomas Graves <tgraves@nvidia.com>
* Update for spark 3.1.0
nartal1 pushed another commit to nartal1/spark-rapids that referenced this pull request on Jun 9, 2021 (same commit message as above).
tgravescs pushed a commit to tgravescs/spark-rapids that referenced this pull request on Nov 30, 2023:

* …IDIA#1134)

  Signed-off-by: spark-rapids automation <70000568+nvauto@users.noreply.github.com>
This adds support for saveAsTable and CREATE TABLE ... AS SELECT SQL statements.
This is basically just metadata operations followed by a call into the existing GpuInsertIntoHadoopFsRelationCommand.
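For reference, a minimal, self-contained sketch of the two user-facing operations this change covers (the table names and sample data below are made up); with the plugin enabled, both resolve to metadata work plus the existing GpuInsertIntoHadoopFsRelationCommand for the actual write:

```scala
import org.apache.spark.sql.SparkSession

object SaveAsTableExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("saveAsTable-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "name")

    // DataFrameWriter.saveAsTable with a file-based source (parquet or orc)
    df.write.format("parquet").mode("overwrite").saveAsTable("example_parquet_table")

    // CREATE TABLE ... AS SELECT with a file-based provider
    spark.sql(
      "CREATE TABLE example_orc_table USING orc " +
      "AS SELECT id, name FROM example_parquet_table")

    spark.stop()
  }
}
```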
I ended up copying the Spark DataSource class and making a slightly modified GpuDataSource version. One of the main changes is that we pass in the provider and the FileFormat, because we already have to figure those out in the GpuOverrides CreateDataSourceTableAsSelectCommandMeta.
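To illustrate that design choice, here is a hypothetical sketch (the constructor shape below is not the actual GpuDataSource signature): the provider class and GPU-aware FileFormat that were already resolved during planning are handed in, so the GPU path does not repeat the provider lookup that Spark's own DataSource performs.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.datasources.FileFormat

// Hypothetical shape only: the key point is that the provider and FileFormat
// resolved once in CreateDataSourceTableAsSelectCommandMeta are passed in
// rather than being looked up a second time.
case class GpuDataSourceSketch(
    sparkSession: SparkSession,
    className: String,              // user-facing provider name, e.g. "parquet"
    providingClass: Class[_],       // provider class resolved during planning
    gpuFileFormat: FileFormat,      // GPU-aware FileFormat chosen by the overrides
    options: Map[String, String],
    partitionColumns: Seq[String])
```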
Other than that, I split the Parquet and ORC tests into separate read and write files and added a number of different write tests.
I had to add a shim function because https://issues.apache.org/jira/browse/SPARK-32431 went into Spark 3.1 and that changed the call into checkColumnNameDuplication.
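As a rough illustration of the shim pattern (the trait and object names below are hypothetical, not the actual spark-rapids shim classes; only the pre-3.1 SchemaUtils call is shown, and a Spark 3.1 shim would forward to the post-SPARK-32431 signature instead):

```scala
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.util.SchemaUtils

// Hypothetical shim trait: the plugin calls one method, and each Spark-version
// shim forwards to whichever SchemaUtils signature exists in that version.
trait SchemaCheckShim {
  def checkSchemaColumnNameDuplication(
      schema: StructType,
      colType: String,
      caseSensitive: Boolean): Unit
}

// Pre-Spark-3.1 implementation: uses the signature that still takes the colType message.
object Spark30XSchemaCheckShim extends SchemaCheckShim {
  override def checkSchemaColumnNameDuplication(
      schema: StructType,
      colType: String,
      caseSensitive: Boolean): Unit = {
    SchemaUtils.checkSchemaColumnNameDuplication(schema, colType, caseSensitive)
  }
}
```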
Also note that Spark has not enabled any of the DataSource V2 writers; it always falls back to the V1 path at this point.
Fixes #1096