
[FEA] Binary type encoding for Parquet writes #10778

Closed

jlowe opened this issue May 3, 2022 · 8 comments

jlowe commented May 3, 2022

Is your feature request related to a problem? Please describe.
Apache Spark supports writing binary data to Parquet as Parquet's binary (BYTE_ARRAY) type. The RAPIDS Accelerator needs to emulate this behavior for binary data in order to accelerate Parquet writes containing binary types.

Describe the solution you'd like
The cuIO column_metadata already contains a commented-out field that would indicate whether the data should be encoded as a binary type. If this flag is set, incoming data that is either a string or a LIST of INT8/UINT8 would be encoded as Parquet binary.

Describe alternatives you've considered
cuIO could infer that a column that is a LIST of INT8/UINT8 should be written as binary, but some use cases would break under that assumption. For example, if someone used the Spark type ArrayType(ByteType), it would appear in cudf as a LIST of INT8, yet it should not be binary-encoded.
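For illustration, a minimal sketch of how that flag might look from the caller's side, assuming the commented-out _output_as_binary field becomes a setter named set_output_as_binary on cudf::io::column_in_metadata (that name matches what eventually landed, but exact builder signatures vary across libcudf releases):

#include <cudf/io/parquet.hpp>
#include <cudf/io/types.hpp>
#include <cudf/table/table_view.hpp>

// Writes `table`, whose first column is a string or LIST<INT8/UINT8>
// column, asking the writer to encode it as Parquet BYTE_ARRAY (binary).
void write_with_binary_column(cudf::table_view const& table)
{
  cudf::io::table_input_metadata metadata{table};
  // Assumed setter corresponding to the commented-out _output_as_binary
  // field; without it, a LIST<INT8> column would be written as a
  // Parquet LIST of 8-bit integers rather than as binary.
  metadata.column_metadata[0].set_name("binary").set_output_as_binary(true);

  auto opts = cudf::io::parquet_writer_options::builder(
                cudf::io::sink_info{"out.parquet"}, table)
                .metadata(metadata)
                .build();
  cudf::io::write_parquet(opts);
}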

jlowe added the feature request, Needs Triage, libcudf, cuIO, and Spark labels on May 3, 2022
@devavret

I cannot find in the Parquet spec how to distinguish between the string and binary types when the physical type is BYTE_ARRAY. There is no binary type in the LogicalType or ConvertedType structs.

This means you could just encode your current data as binary and pass it in as a string column; you'd have to convey the type information to the reader on your own.

Do you have any Parquet files I can look at that contain binary data?

jlowe commented May 18, 2022

Here's a sample Parquet file for reference, written by Apache Spark, that contains a string column, a list-of-bytes column, and a binary column.

string-list-binary.parquet.zip

It's clear that Spark encodes extra metadata to distinguish strings from raw binary types, but otherwise the two appear identical in the raw Parquet metadata. Here's the metadata output from Parquet's CLI tool:

File path:  /tmp/string-list-binary.parquet
Created by: parquet-mr version 1.12.2 (build 77e30c8093386ec52c3cfa6c34b7ef3321322c94)
Properties:
                   org.apache.spark.version: 3.2.1
  org.apache.spark.sql.parquet.row.metadata: {"type":"struct","fields":[{"name":"string","type":"string","nullable":true,"metadata":{}},{"name":"list","type":{"type":"array","elementType":"byte","containsNull":true},"nullable":true,"metadata":{}},{"name":"binary","type":"binary","nullable":true,"metadata":{}}]}
Schema:
message spark_schema {
  optional binary string (STRING);
  optional group list (LIST) {
    repeated group list {
      optional int32 element (INTEGER(8,true));
    }
  }
  optional binary binary;
}


Row group 0:  count: 1  125.00 B records  start: 4  total: 125 B
--------------------------------------------------------------------------------
                   type      encodings count     avg size   nulls   min / max
string             BINARY    S   _     1         38.00 B    0       "123" / "123"
list.list.element  INT32     S   _     3         16.33 B    0       "1" / "3"
binary             BINARY    S   _     1         38.00 B    0       "0x010203" / "0x010203"

I agree this is not a feature requirement of the libcudf writer; instead, it is more of a requirement on the high-level reader code to reinterpret string data as binary data (or vice versa) based on the expected schema.

jlowe closed this as completed on May 18, 2022

revans2 commented May 19, 2022

In the SchemaElement, the LogicalType and the ConvertedType are both optional. For a BINARY column, the Type is BYTE_ARRAY and neither LogicalType nor ConvertedType is set.

https://github.com/apache/parquet-mr/blob/a2da156b251d13bce1fa81eb95b555da04880bc1/parquet-column/src/main/java/org/apache/parquet/schema/PrimitiveType.java#L608-L625

That code prints out the type for the schema in the output above. If there is a LogicalTypeAnnotation (the Java APIs combine LogicalType and ConvertedType into one), it is printed in parentheses. This happens for optional binary string (STRING); but not for optional binary binary;.

That is the difference. If we write the binary columns out as strings, they will have the String LogicalType and the UTF8 ConvertedType. I'm not sure what Spark does if it sees this mismatch between the schema it wrote out and the one in the Parquet file; it will probably just auto-cast the column back to binary, but anything that does not read the Spark metadata schema might interpret the column incorrectly.

Technically, for reads we should be looking at this metadata too and only returning DType.STRING when the UTF8/String annotation is set, but older versions of Parquet didn't write that annotation, so we need a flag to say whether you care about it or not.

revans2 reopened this on May 19, 2022

revans2 commented May 19, 2022

Actually, I need to correct that a bit, at least for what Spark does on reads:

BINARY with an annotation of STRING, ENUM, UUID, or JSON should also be returned as DType.STRING.
BINARY with no logical annotation, or with a BSON annotation, is returned as Spark's binary type.
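For illustration, here is a small self-contained sketch of that mapping. The enums are stand-ins rather than libcudf's internal reader types, and the legacy_binary_as_string flag is an assumed knob reflecting the earlier point about older files that carry no annotation at all:

#include <optional>

enum class Annotation { STRING, ENUM, UUID, JSON, BSON };
enum class OutType { STRING, BINARY };

// Maps a BYTE_ARRAY column's logical annotation to the type Spark
// expects on read, per the rules above.
OutType map_byte_array(std::optional<Annotation> annotation,
                       bool legacy_binary_as_string)
{
  if (annotation) {
    switch (*annotation) {
      case Annotation::STRING:
      case Annotation::ENUM:
      case Annotation::UUID:
      case Annotation::JSON: return OutType::STRING;
      case Annotation::BSON: return OutType::BINARY;
    }
  }
  // No annotation: older writers omitted it even for strings, so defer
  // to a caller-supplied flag instead of always assuming binary.
  return legacy_binary_as_string ? OutType::STRING : OutType::BINARY;
}

With legacy_binary_as_string left false, unannotated BYTE_ARRAY columns surface as binary, matching the Spark behavior described above.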

@devavret

OK, so you want the Parquet writer to write neither a ConvertedType nor a LogicalType when you pass the column option // bool _output_as_binary = false;
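To make that proposal concrete, here is a sketch of the writer-side decision using hypothetical stand-in structs (not cudf's internal Parquet types): the BYTE_ARRAY schema element receives the UTF8 annotation only when the column is not flagged for binary output.

#include <optional>
#include <string>

enum class PhysicalType { BYTE_ARRAY };
enum class ConvertedType { UTF8 };

struct SchemaElement {
  std::string name;
  PhysicalType type;
  std::optional<ConvertedType> converted_type;  // unset => plain binary
};

// Builds the schema element for a string-like column; leaving the
// annotation unset is what makes readers see raw binary.
SchemaElement make_byte_array_element(std::string name, bool output_as_binary)
{
  SchemaElement elem{std::move(name), PhysicalType::BYTE_ARRAY, std::nullopt};
  if (!output_as_binary) { elem.converted_type = ConvertedType::UTF8; }
  return elem;
}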

@github-actions

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.


sameerz commented Jul 25, 2022

Still needed.

rapids-bot pushed a commit referencing this issue on Jul 29, 2022
There are a couple of issues (#11044 and #10778) revolving around adding support for binary writes and reads to Parquet. The desire is to be able to write strings and lists of int8 values as binary. This PR adds support for strings to be written as binary and for binary data to be read as either binary or strings. I have left the default for binary data to be read as a string, to prevent any surprises upon upgrade.

Single-depth list columns of int8 and uint8 values are not written as binary with this change. That will be another PR after discussions about the possible impact of the change.

Closes #11044 
Issue #10778

Authors:
  - Mike Wilson (https://github.com/hyperbolic2346)

Approvers:
  - Karthikeyan (https://github.com/karthikeyann)
  - MithunR (https://github.com/mythrocks)
  - Yunsong Wang (https://github.com/PointKernel)
  - Vukasin Milovanovic (https://github.com/vuule)
  - https://github.com/nvdbaranec
  - Vyas Ramasubramani (https://github.com/vyasr)

URL: #11160
@hyperbolic2346

Fixed by #11328 and #11160.
