Allow for GPUCoalesceBatch to deal with Map #1052
Conversation
Resolved review comment on sql-plugin/src/main/java/com/nvidia/spark/rapids/GpuColumnVector.java (outdated)
-    --conf "spark.driver.extraJavaOptions=-Duser.timezone=GMT $COVERAGE_SUBMIT_FLAGS" \
-    --conf 'spark.executor.extraJavaOptions=-Duser.timezone=GMT' \
+    --conf "spark.driver.extraJavaOptions=-ea -Duser.timezone=GMT $COVERAGE_SUBMIT_FLAGS" \
+    --conf 'spark.executor.extraJavaOptions=-ea -Duser.timezone=GMT' \
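The diff above adds `-ea` to the driver and executor JVM options, which enables Java `assert` statements during the test run (they are disabled by default). As a minimal, self-contained sketch of what that flag controls (the class name here is illustrative, not part of the plugin):

```java
// Demonstrates the effect of the JVM -ea (enableassertions) flag.
// `assert` statements are stripped at runtime unless -ea is passed,
// which is why the test script forwards it via
// spark.driver.extraJavaOptions / spark.executor.extraJavaOptions.
public class AssertDemo {
    // Returns true only when assertions are enabled for this class,
    // i.e. the JVM was started with -ea.
    static boolean assertionsEnabled() {
        boolean enabled = false;
        // This assignment only executes when assertions are on.
        assert enabled = true;
        return enabled;
    }

    public static void main(String[] args) {
        System.out.println("assertions enabled: " + assertionsEnabled());
    }
}
```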
These are not needed; I just added them to verify that I didn't get something wrong.
Signed-off-by: Robert (Bobby) Evans <bobby@apache.org>
Force-pushed from 037c80f to 2b9d7f3
build
Resolved review comment on sql-plugin/src/main/java/com/nvidia/spark/rapids/GpuColumnVector.java (outdated)
Besides the nits you pointed out yourself and one minor naming nit, this looks good to me. Approving.
build
Signed-off-by: Robert (Bobby) Evans <bobby@apache.org>
…IDIA#1052) Signed-off-by: spark-rapids automation <70000568+nvauto@users.noreply.github.com>
This allows GPUCoalesceBatch to work on maps that have been explicitly allowed in other parts of the code. It is only the first of several steps toward supporting nested types: Spark has more data types than cudf, so there is no clear one-to-one mapping between them, which is what we relied on prior to this change.
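The "explicitly allowed" approach the description mentions can be sketched as follows. This is a hypothetical illustration, not the plugin's actual API: the enum, method names, and the `allowMaps` flag are all invented for clarity. The point is that a type without a direct cudf equivalent (a map) is only accepted when the caller opts in, instead of relying on a one-to-one Spark-to-cudf type mapping.

```java
// Illustrative sketch of opt-in type support for a coalesce path.
// Names (TypeSupportSketch, allowMaps, etc.) are hypothetical.
public class TypeSupportSketch {
    enum SparkType { INT, STRING, MAP }

    // Old behavior: only types with a direct cudf equivalent pass.
    static boolean hasDirectCudfMapping(SparkType t) {
        return t == SparkType.INT || t == SparkType.STRING;
    }

    // New behavior: maps pass only when the caller explicitly allows them;
    // everything else still falls back to the direct-mapping check.
    static boolean isSupportedForCoalesce(SparkType t, boolean allowMaps) {
        if (t == SparkType.MAP) {
            return allowMaps;
        }
        return hasDirectCudfMapping(t);
    }
}
```

The design keeps the default conservative: existing call sites that never mention maps keep the old behavior, and only the code paths that have been audited for map support pass the opt-in flag.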