
Ballista context should get file metadata from scheduler, not from local disk #22

Open
andygrove opened this issue May 15, 2021 · 3 comments
Labels
enhancement New feature or request

Comments

@andygrove
Member

Is your feature request related to a problem or challenge? Please describe what you are trying to do.
I have a Ballista cluster running, and each scheduler and executor has access to TPC-H data locally.
I am running the benchmark client on my desktop, and I do not have access to the data locally.
Query planning fails with "file not found" because BallistaContext::read_parquet is looking for the file on the local file system when it should be getting the file metadata from a scheduler in the cluster.

Describe the solution you'd like
The context should send a gRPC request to the scheduler to get the necessary metadata.
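A sketch of what such an RPC could look like, as a protobuf definition. All message, field, and service names here are hypothetical illustrations, not Ballista's actual scheduler API:

```proto
// Hypothetical addition to the scheduler's gRPC service. The client
// would call this instead of reading file metadata from local disk.
message GetFileMetadataParams {
  string path = 1;
  string file_type = 2; // e.g. "parquet"
}

message GetFileMetadataResult {
  // Arrow schema, serialized in the Arrow IPC format.
  bytes schema = 1;
}

service SchedulerGrpc {
  // Resolve file metadata on the scheduler's file system, which has
  // access to the data, rather than on the client's.
  rpc GetFileMetadata (GetFileMetadataParams) returns (GetFileMetadataResult);
}
```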

Describe alternatives you've considered
None

Additional context
None

@rdettai
Contributor

rdettai commented Sep 3, 2021

@andygrove since the client only handles the logical plan, I think it does not need to know about the list of files or the statistics; it only needs the schema:

  • With the current DataFusion implementation, we could build a table provider without any statistics on the client, then load the statistics once the logical plan is deserialized on the scheduler (cost-based optimizations would be ineffective on the client, but that is not a big issue, as we could run them on the scheduler instead).
  • In Moving cost based optimizations to physical planning datafusion#962, I am proposing a change that would move the statistics entirely from the logical plan to the physical plan.

As Flight already has an endpoint to query the schema, this would avoid creating and maintaining a new one 😃
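The split described in the first bullet can be sketched with stand-in types. Everything here is illustrative only (minimal stubs, not DataFusion's actual `TableProvider` or plan types): the client ships a plan that carries only the schema, and statistics are attached on the scheduler after deserialization, where the files are actually readable.

```rust
// Stand-ins for DataFusion's schema and statistics types.
#[derive(Clone, Debug, PartialEq)]
struct Schema {
    fields: Vec<String>,
}

#[derive(Clone, Debug, PartialEq)]
struct Statistics {
    num_rows: Option<usize>,
}

/// What the client serializes and sends: schema, but no statistics.
#[derive(Debug)]
struct LogicalScan {
    path: String,
    schema: Schema,
}

/// What the scheduler builds for physical planning: scan plus statistics.
#[derive(Debug)]
struct PhysicalScan {
    scan: LogicalScan,
    statistics: Statistics,
}

/// On the scheduler, load statistics and attach them to the deserialized
/// plan. A real implementation would read file footers/metadata from disk;
/// this stub just fabricates a row count.
fn attach_statistics(scan: LogicalScan) -> PhysicalScan {
    let statistics = Statistics { num_rows: Some(1_000) };
    PhysicalScan { scan, statistics }
}

fn main() {
    // Client side: build the logical scan from the schema alone.
    let client_side = LogicalScan {
        path: "/data/tpch/lineitem.parquet".to_string(),
        schema: Schema {
            fields: vec!["l_orderkey".into(), "l_quantity".into()],
        },
    };
    // Scheduler side: statistics become available only here.
    let scheduler_side = attach_statistics(client_side);
    assert_eq!(scheduler_side.statistics.num_rows, Some(1_000));
}
```

Cost-based optimizations that need row counts would then run only on the scheduler, consistent with the proposal in datafusion#962.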

@yahoNanJing
Contributor

Hi @andygrove, we have integrated Ballista with HDFS support. Our workaround is to make the file path self-describing. For example, a local file path should be file://tmp/..., and an HDFS file path should be hdfs://localhost:xxx:/tmp/...

To make it work, we also changed the object store API a bit. Later I'll create a PR for this.
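A minimal sketch of dispatching on such a self-describing path, assuming a scheme prefix selects the object store. The enum and parsing logic here are illustrative only, not Ballista's actual object store API:

```rust
// Which backing store a self-describing path refers to.
#[derive(Debug, PartialEq)]
enum ObjectStoreKind {
    LocalFile,
    Hdfs,
}

/// Split a path like "hdfs://localhost:9000/tmp/data" into the store it
/// names and the remainder of the path. Paths without a scheme prefix
/// fall back to the local file system.
fn parse_path(path: &str) -> (ObjectStoreKind, &str) {
    match path.split_once("://") {
        Some(("file", rest)) => (ObjectStoreKind::LocalFile, rest),
        Some(("hdfs", rest)) => (ObjectStoreKind::Hdfs, rest),
        _ => (ObjectStoreKind::LocalFile, path),
    }
}

fn main() {
    assert_eq!(
        parse_path("file:///tmp/data"),
        (ObjectStoreKind::LocalFile, "/tmp/data")
    );
    assert_eq!(
        parse_path("hdfs://localhost:9000/tmp/data"),
        (ObjectStoreKind::Hdfs, "localhost:9000/tmp/data")
    );
    assert_eq!(parse_path("/tmp/data"), (ObjectStoreKind::LocalFile, "/tmp/data"));
}
```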

@avantgardnerio
Contributor

> Later I'll create a PR for this.

@yahoNanJing this intersects with work I'm currently doing, so anything you could share would be helpful!
