
330db build failure "GpuArrowPythonRunner.scala:80: not found: type WriterThread" #10228

Closed · gerashegalov opened this issue Jan 19, 2024 · 1 comment · Fixed by #10232
Labels: audit_4.0.0 (Audit related tasks for 4.0.0)
gerashegalov commented Jan 19, 2024

It looks like a recent Databricks 11.3 update pulled in
[SPARK-44705][PYTHON] Make PythonRunner single-threaded

[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark311/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuArrowPythonRunner.scala:80: not found: type WriterThread
[ERROR]       context: TaskContext): WriterThread = {
[ERROR]                              ^
[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark311/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuArrowPythonRunner.scala:81: not found: type WriterThread
[ERROR]     new WriterThread(env, worker, inputIterator, partitionIndex, context) {
[ERROR]         ^
[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark311/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuCoGroupedArrowPythonRunner.scala:84: not found: type WriterThread
[ERROR]       context: TaskContext): WriterThread = {
[ERROR]                              ^
[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark311/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuCoGroupedArrowPythonRunner.scala:85: not found: type WriterThread
[ERROR]     new WriterThread(env, worker, inputIterator, partitionIndex, context) {
[ERROR]         ^
[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark320/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuArrowPythonOutput.scala:60: not found: type WriterThread
[ERROR]       writerThread: WriterThread,
[ERROR]                     ^
[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark320/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuArrowPythonOutput.scala:68: type mismatch;
 found   : java.net.Socket
 required: org.apache.spark.api.python.PythonWorker
[ERROR]     new ReaderIterator(stream, writerThread, startTime, env, worker, pid, releasedOrClosed,
[ERROR]                                                              ^
[INFO] java.net.Socket <: org.apache.spark.api.python.PythonWorker?
[INFO] false
[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark321db/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuGroupUDFArrowPythonRunner.scala:71: not found: type WriterThread
[ERROR]       context: TaskContext): WriterThread = {
[ERROR]                              ^
[ERROR] /home/ubuntu/spark-rapids/sql-plugin/src/main/spark321db/scala/org/apache/spark/sql/rapids/execution/python/shims/GpuGroupUDFArrowPythonRunner.scala:72: not found: type WriterThread
[ERROR]     new WriterThread(env, worker, inputIterator, partitionIndex, context) {
[ERROR]         ^
[ERROR] 8 errors found
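Both error classes above follow from the same upstream change. The sketch below outlines the shape of that change as it appears in Apache Spark's `PythonRunner` before and after SPARK-44705; the exact signatures in the Databricks 11.3 build may differ, so treat this as an illustration rather than the literal DBR API.

```scala
// Sketch of the upstream API change behind these errors (assumption:
// based on Apache Spark's PythonRunner before/after SPARK-44705; the
// Databricks 11.3 variant may differ in detail).

// Before: each runner spawned a dedicated writer thread that talked to
// the Python worker over a raw socket.
//
//   protected def newWriterThread(
//       env: SparkEnv,
//       worker: java.net.Socket,
//       inputIterator: Iterator[IN],
//       partitionIndex: Int,
//       context: TaskContext): WriterThread
//
// After: the single-threaded model removes WriterThread entirely and
// wraps the socket in a PythonWorker handle. That accounts for both
// failures above: "not found: type WriterThread" in the shims that
// still override the old method, and the java.net.Socket vs
// org.apache.spark.api.python.PythonWorker type mismatch.
//
//   protected def newWriter(
//       env: SparkEnv,
//       worker: PythonWorker,
//       inputIterator: Iterator[IN],
//       partitionIndex: Int,
//       context: TaskContext): Writer
```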

Presumably all spark-rapids plugin releases (23.02+) supporting DBR 11.3 are affected.

Originally posted by @gerashegalov in #10225 (comment)

@gerashegalov added the "? - Needs Triage" (Need team to review and classify) and "audit_4.0.0" (Audit related tasks for 4.0.0) labels Jan 19, 2024
@tgravescs (Collaborator) commented:

We have made changes similar to this in our 341db shim. I think this is the PR where they were introduced: 54145c4

gerashegalov pushed a commit that referenced this issue Jan 20, 2024: Fixed 330db Shims to Adopt the PythonRunner Changes [databricks] (#10232)

This PR removes the old 330db shims in favor of the new Shims, similar to the one in 341db. 

**Tests:**
Ran udf_test.py on Databricks 11.3 and they all passed. 

fixes #10228 

---------

Signed-off-by: raza jafri <rjafri@nvidia.com>
@sameerz removed the "? - Needs Triage" (Need team to review and classify) label Jan 25, 2024
razajafri added a commit to razajafri/spark-rapids that referenced this issue Jan 25, 2024: Fixed 330db Shims to Adopt the PythonRunner Changes [databricks] (NVIDIA#10232)

This PR removes the old 330db shims in favor of the new Shims, similar to the one in 341db. 

**Tests:**
Ran udf_test.py on Databricks 11.3 and they all passed. 

fixes NVIDIA#10228 

---------

Signed-off-by: raza jafri <rjafri@nvidia.com>
razajafri added a commit that referenced this issue Jan 26, 2024
* Download Maven from apache.org archives (#10225)

Fixes #10224 

Replace broken install using apt by downloading Maven from apache.org.

Signed-off-by: Gera Shegalov <gera@apache.org>

* Fix a hang for Pandas UDFs on DB 13.3[databricks] (#9833)

fix #9493
fix #9844

The Python runner uses two separate threads to write and read data with Python processes.
However, on DB 13.3 it becomes single-threaded, meaning reading and writing run on the same thread.
The first read is always ahead of the first write, but the original BatchQueue waits
on the first read until the first write is done, so it waits forever.

Changes made:

- Update the BatchQueue to support asking for a batch instead of waiting until one is inserted into the queue.
   This eliminates the ordering requirement between reading and writing.
- Introduce a new class named BatchProducer that works with the new BatchQueue to support peeking
   at row counts on demand for the read side.
- Apply the new BatchQueue to the relevant plans.
- Update the Python runners to write one batch at a time for the single-threaded model.
- Found an issue with PythonUDAF and RunningWindowFunctionExec; it may be a bug specific to DB 13.3,
   so a test (test_window_aggregate_udf_on_cpu) was added for it.
- Other small refactors.
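The deadlock-avoidance idea in the list above can be sketched as follows. This is a hypothetical illustration, not the actual spark-rapids BatchQueue/BatchProducer classes: instead of the reader blocking until a writer thread enqueues a batch, the reader can pull the next batch from the producer itself, so a single thread can interleave reads and writes in either order.

```scala
// Hypothetical sketch (assumed names, not the real spark-rapids API) of
// a pull-based batch queue: the reader never blocks waiting for a
// writer that, in the single-threaded model, may never get to run first.
class PullBatchQueue[T](produceNext: () => Option[T]) {
  private val queue = scala.collection.mutable.Queue[T]()

  // Writer side: park a batch for later reads.
  def offer(batch: T): Unit = synchronized { queue.enqueue(batch) }

  // Reader side: drain parked batches first; if none are queued, pull
  // one directly from the producer instead of waiting on an insert.
  def next(): Option[T] = synchronized {
    if (queue.nonEmpty) Some(queue.dequeue()) else produceNext()
  }
}
```

With this shape, the "first read before first write" ordering on DB 13.3 is harmless: an empty queue simply triggers a direct pull rather than an indefinite wait.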
---------

Signed-off-by: Firestarman <firestarmanllc@gmail.com>

* Fix a potential data corruption for Pandas UDF (#9942)

This PR moves the BatchQueue into the DataProducer so it shares the same lock as the output iterator
returned by asIterator, and makes the batch movement from the input iterator to the batch queue
an atomic operation, eliminating the race when appending batches to the queue.
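The shape of that fix can be sketched as below. This is a hypothetical illustration under assumed names (not the real DataProducer): the queue and the output iterator synchronize on one shared lock, and moving a batch from the input iterator into the queue happens entirely inside that lock, so a concurrent reader can never observe a half-completed move.

```scala
// Hypothetical sketch (assumed names) of sharing one lock between the
// producer and the iterator it hands out, making the batch move atomic.
class DataProducerSketch[T](input: Iterator[T]) {
  private val lock = new Object
  private val pending = scala.collection.mutable.Queue[T]()

  // Atomic move: pull from the input and append to the queue while
  // holding the shared lock.
  def produceOne(): Unit = lock.synchronized {
    if (input.hasNext) pending.enqueue(input.next())
  }

  // The output iterator synchronizes on the same lock as the producer,
  // so it sees either "batch still in input" or "batch in queue",
  // never an in-between state.
  def asIterator: Iterator[T] = new Iterator[T] {
    def hasNext: Boolean = lock.synchronized {
      pending.nonEmpty || input.hasNext
    }
    def next(): T = lock.synchronized {
      if (pending.isEmpty) pending.enqueue(input.next())
      pending.dequeue()
    }
  }
}
```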

* Do some refactor for the Python UDF code to try to reduce duplicate code. (#9902)

Signed-off-by: Firestarman <firestarmanllc@gmail.com>

* Fixed 330db Shims to Adopt the PythonRunner Changes [databricks] (#10232)

This PR removes the old 330db shims in favor of the new Shims, similar to the one in 341db. 

**Tests:**
Ran udf_test.py on Databricks 11.3 and they all passed. 

fixes #10228 

---------

Signed-off-by: raza jafri <rjafri@nvidia.com>

---------

Signed-off-by: Gera Shegalov <gera@apache.org>
Signed-off-by: Firestarman <firestarmanllc@gmail.com>
Signed-off-by: raza jafri <rjafri@nvidia.com>
Co-authored-by: Gera Shegalov <gera@apache.org>
Co-authored-by: Liangcai Li <firestarmanllc@gmail.com>
3 participants