
[BUG] test_broadcast_nested_loop_join_special_case fails on databricks #441

Closed · tgravescs opened this issue Jul 27, 2020 · 3 comments · Fixed by #477
Labels: bug (Something isn't working), P0 (Must have for release)

tgravescs (Collaborator) commented Jul 27, 2020
Running integration tests on Databricks, I'm seeing test_broadcast_nested_loop_join_special_case fail:

FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[String][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Byte][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Short][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Integer][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Long][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Boolean][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Date][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Timestamp][IGNORE_ORDER({'local': True})]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Float][IGNORE_ORDER({'local': True}), INCOMPAT]
FAILED src/main/python/join_test.py::test_broadcast_nested_loop_join_special_case[Double][IGNORE_ORDER({'local': True}), INCOMPAT]

@tgravescs tgravescs added bug Something isn't working ? - Needs Triage Need team to review and classify labels Jul 27, 2020
@sameerz sameerz added P0 Must have for release and removed ? - Needs Triage Need team to review and classify labels Jul 28, 2020
revans2 (Collaborator) commented Jul 29, 2020

The error is:

20/07/29 18:58:27 WARN TaskSetManager: Lost task 0.0 in stage 58.0 (TID 49, ip-10-59-230-224.us-west-2.compute.internal, executor driver): java.lang.ArrayIndexOutOfBoundsException: 0
	at ai.rapids.cudf.Table.<init>(Table.java:52)
	at com.nvidia.spark.rapids.GpuColumnVector.from(GpuColumnVector.java:245)
	at org.apache.spark.sql.rapids.execution.GpuBroadcastNestedLoopJoinExecBase.$anonfun$doExecuteColumnar$5(GpuBroadcastNestedLoopJoinExec.scala:235)
	at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:26)
	at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:24)
	at org.apache.spark.sql.rapids.execution.GpuBroadcastNestedLoopJoinExecBase.withResource(GpuBroadcastNestedLoopJoinExec.scala:134)
	at org.apache.spark.sql.rapids.execution.GpuBroadcastNestedLoopJoinExecBase.builtTable$lzycompute$1(GpuBroadcastNestedLoopJoinExec.scala:234)
	at org.apache.spark.sql.rapids.execution.GpuBroadcastNestedLoopJoinExecBase.builtTable$2(GpuBroadcastNestedLoopJoinExec.scala:233)
	at org.apache.spark.sql.rapids.execution.GpuBroadcastNestedLoopJoinExecBase.$anonfun$doExecuteColumnar$7(GpuBroadcastNestedLoopJoinExec.scala:249)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:844)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:844)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:356)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:320)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:144)
	at org.apache.spark.scheduler.Task.run(Task.scala:117)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$9(Executor.scala:639)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1559)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:642)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
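
The `ArrayIndexOutOfBoundsException: 0` thrown in `Table.<init>` points at a table being constructed from an empty column array: the column-less side of the join produces a batch with zero columns, so reading the first column fails. A minimal Python analogue of that failure mode (illustrative only, not the plugin code):

```python
def first_column(columns):
    """Mimic code that assumes a batch has at least one column."""
    # With a column-less batch this raises IndexError, the Python
    # analogue of Java's ArrayIndexOutOfBoundsException: 0.
    return columns[0]

try:
    first_column([])  # the column-less side of the join
except IndexError:
    print("empty column array: no column 0 to read")
```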

revans2 (Collaborator) commented Jul 29, 2020

Looks like they are doing a join where one of the sides has no columns, only rows. In that case I think we need to duplicate the table that does have columns N times, where N is the number of rows on the column-less side of the join.
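
The equivalence suggested above can be sketched in plain Python (a row-based illustration, not the actual columnar implementation): a cross join against a side that has N rows but zero columns simply repeats every row of the other side N times.

```python
def cross_join_with_columnless_side(rows, n_rows_columnless):
    """Emulate `rows CROSS JOIN t` where t has rows but no columns.

    Since t contributes no output columns, the join result is just
    every input row emitted once per row of t.
    """
    return [row for row in rows for _ in range(n_rows_columnless)]

left = [("a", 1), ("b", 2)]
result = cross_join_with_columnless_side(left, 3)
assert len(result) == len(left) * 3  # 2 rows x 3 repeats = 6 rows
```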

revans2 (Collaborator) commented Jul 29, 2020

I think to do this properly we want to pull in the cudf `repeat` API.
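
A hedged sketch of the repeat semantics being proposed, written against a toy columnar table (a dict of column name to value list) rather than the real cudf API:

```python
def repeat_table(columns, n):
    """Emit each row of a columnar table n times, column by column.

    `columns` maps column name -> list of values, with all lists the
    same length (the row count). Repeating each value n times in every
    column keeps rows aligned, which is what a table-level `repeat`
    would do for the built side of this degenerate join.
    """
    return {name: [v for v in col for _ in range(n)]
            for name, col in columns.items()}

built = {"id": [1, 2], "name": ["x", "y"]}
repeated = repeat_table(built, 3)
# each of the 2 original rows now appears 3 times, in row order
```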

pxLi pushed a commit to pxLi/spark-rapids that referenced this issue May 12, 2022
tgravescs pushed a commit to tgravescs/spark-rapids that referenced this issue Nov 30, 2023