
[Bug] Fix Arrow-FS parquet reader for larger files #17099

Open · wants to merge 4 commits into branch-24.12

Conversation

rjzamora (Member)

Description

Follow-up to #16684

There is currently a bug in dask_cudf.read_parquet(..., filesystem="arrow") when the files are larger than the "dataframe.parquet.minimum-partition-size" config value. More specifically, when the files are not aggregated together, the output partitions are pd.DataFrame objects instead of cudf.DataFrame objects.
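The size-threshold behavior referenced above can be illustrated with a rough sketch of how a minimum partition size splits files into partitions. This is a hypothetical helper for illustration only, not the dask_cudf implementation; the reported bug surfaced on the path where a file meets the threshold on its own and is not aggregated with others.

```python
# Hypothetical sketch (not dask_cudf code): pack files into partitions
# until a minimum partition size is reached. A file at or above the
# threshold ends up in its own partition -- the un-aggregated case in
# which the reported bug (pd.DataFrame output instead of cudf.DataFrame)
# appeared.
def aggregate_files(file_sizes, minimum_partition_size):
    """Group file indices into partitions by cumulative size."""
    partitions, current, current_size = [], [], 0
    for i, size in enumerate(file_sizes):
        current.append(i)
        current_size += size
        if current_size >= minimum_partition_size:
            partitions.append(current)
            current, current_size = [], 0
    if current:
        partitions.append(current)
    return partitions

# Small files are aggregated together; the large file stands alone.
print(aggregate_files([10, 10, 20, 100], 32))  # [[0, 1, 2], [3]]
```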

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@rjzamora added labels "bug" (Something isn't working), "2 - In Progress" (Currently a work in progress), "dask" (Dask issue), and "non-breaking" (Non-breaking change) on Oct 16, 2024
@rjzamora self-assigned this on Oct 16, 2024
@github-actions bot added the "Python" (Affects Python cuDF API) label on Oct 16, 2024
@rjzamora added the "3 - Ready for Review" (Ready for review by team) label and removed the "2 - In Progress" label on Oct 17, 2024
@rjzamora marked this pull request as ready for review on October 17, 2024 19:15
@rjzamora requested a review from a team as a code owner on October 17, 2024 19:15
Labels: 3 - Ready for Review, bug, dask, non-breaking, Python
Projects
Status: In Progress
1 participant