
[SPARK-26151][SQL] Return partial results for bad CSV records #23120

Closed
wants to merge 3 commits into apache:master from MaxGekk:failuresafe-partial-result

Conversation

@MaxGekk (Member) commented Nov 22, 2018

## What changes were proposed in this pull request?

In the PR, I propose to change the behaviour of `UnivocityParser` and `FailureSafeParser` to return all fields that were parsed and converted to the expected types successfully, instead of returning a row of all `null`s for a bad input in the `PERMISSIVE` mode. For example, for the CSV line `0,2013-111-11 12:13:14` and the DDL schema `a int, b timestamp`, the new result is `Row(0, null)`.
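The behaviour change can be illustrated with a snippet like the following (a minimal sketch: the local session setup is illustrative, and the `PERMISSIVE` mode shown is the default):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("csv-partial-results")
  .getOrCreate()
import spark.implicits._

// One bad field: the timestamp `2013-111-11 12:13:14` cannot be converted.
val input = Seq("0,2013-111-11 12:13:14").toDS()

val df = spark.read
  .schema("a int, b timestamp")
  .option("mode", "PERMISSIVE") // the default mode, shown for clarity
  .csv(input)

df.show()
// Before this change: [null, null]  (the whole row is nulled out)
// After this change:  [0, null]     (the parsable field `a` survives)
```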

## How was this patch tested?

It was checked by existing tests from `CsvSuite` and `CsvFunctionsSuite`.

@MaxGekk (Member Author) commented Nov 22, 2018

@HyukjinKwon @cloud-fan Please take a look at the PR.

@SparkQA commented Nov 23, 2018

Test build #99201 has finished for PR 23120 at commit 8f2d69d.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

MaxGekk referenced this pull request Nov 23, 2018
## What changes were proposed in this pull request?

In the PR, I propose a new option for the CSV datasource - `lineSep` - similar to the Text and JSON datasources. The option allows specifying a custom line separator of at most 2 characters (because of a restriction in the `uniVocity` parser). The new option can be used in both reading and writing CSV files.

## How was this patch tested?

Added a few read tests with a custom `lineSep` for enabled/disabled `multiLine`, as well as write tests. Also added round-trip tests.

Closes #23080 from MaxGekk/csv-line-sep.

Lead-authored-by: Maxim Gekk <max.gekk@gmail.com>
Co-authored-by: Maxim Gekk <maxim.gekk@databricks.com>
Signed-off-by: hyukjinkwon <gurwls223@apache.org>
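As a usage illustration of the `lineSep` option described in the referenced commit above (a hypothetical sketch based on that description; the separator and path are arbitrary):

```scala
// Write CSV with a custom single-character line separator
// (the option allows at most 2 characters, per the uniVocity restriction).
spark.range(3).selectExpr("id", "id * 2 AS twice")
  .write
  .option("lineSep", "|")
  .csv("/tmp/csv-custom-linesep")

// Read it back with the same separator.
val readBack = spark.read
  .schema("id long, twice long")
  .option("lineSep", "|")
  .csv("/tmp/csv-custom-linesep")
```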
@MaxGekk (Member Author) commented Nov 27, 2018

@HyukjinKwon Please review the PR.

@MaxGekk (Member Author) commented Nov 28, 2018

@cloud-fan May I ask you to take a look at the PR?

Review thread on the token-conversion loop in `UnivocityParser`:

```scala
// we just need to convert the tokens that correspond to the required columns.
var badRecordException: Option[Throwable] = None
var i = 0
while (i < requiredSchema.length) {
```
Contributor commented:

shall we stop parsing when we hit the first exception?

@MaxGekk (Member Author) Nov 28, 2018:

but we will lose field values that could be converted successfully after the exception.

Contributor commented:

I know it's doable for CSV, as the tokens are separated ahead of time, and we can keep parsing after an exception. Is it also doable for other text-based data sources?

@MaxGekk (Member Author):

It depends on what kind of error we face. If the parser is still in a normal state and ready to continue, we can skip the current error. In the case of JSON, we parse the input in a streaming fashion and convert values to the desired types on the fly. If `JacksonParser` is able to recognize the next token, why should we stop at the first error?
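The keep-parsing pattern under discussion, sketched in isolation (hypothetical helper names and simplified types, not the actual Spark internals):

```scala
// Convert each token independently; remember the first failure but keep
// converting the remaining fields so their values are not thrown away.
def convertTokens(
    tokens: Array[String],
    converters: Array[String => Any]): (Array[Any], Option[Throwable]) = {
  val row = new Array[Any](converters.length)
  var badRecordException: Option[Throwable] = None
  var i = 0
  while (i < converters.length) {
    try {
      row(i) = converters(i)(tokens(i))
    } catch {
      case e: Exception =>
        row(i) = null // the failed field stays null
        if (badRecordException.isEmpty) badRecordException = Some(e)
    }
    i += 1
  }
  // The caller can report `badRecordException` (e.g. via the corrupt-record
  // column) while still keeping the successfully converted fields in `row`.
  (row, badRecordException)
}
```

This works for CSV because the tokens are already split; for JSON, the same idea requires the streaming parser to recover to the next token, as noted above.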

Review thread on `FailureSafeParser` (diff context as shown in the review):

```scala
}
resultRow(corruptFieldIndex.get) = badRecord()
resultRow
(row, badRecord) => {
```
Contributor commented:

Without this change in `FailureSafeParser`, does JSON support returning partial results?

@MaxGekk (Member Author):

For now, JSON does not support this. Additional changes in `JacksonParser` are needed to return partial results.
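A rough sketch of what the `FailureSafeParser` change amounts to (hypothetical signature, `Array[Any]` in place of Spark's `InternalRow`; simplified for illustration):

```scala
// In PERMISSIVE mode, keep whatever fields were parsed successfully and put
// the raw record text into the corrupt-record column, instead of discarding
// the partial row and emitting all nulls.
def toResultRow(
    partialRow: Option[Array[Any]],
    badRecord: () => String,
    schemaLength: Int,
    corruptFieldIndex: Option[Int]): Array[Any] = {
  val resultRow = partialRow.getOrElse(new Array[Any](schemaLength))
  corruptFieldIndex.foreach { idx =>
    resultRow(idx) = badRecord() // raw input preserved for debugging
  }
  resultRow
}
```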

@cloud-fan (Contributor) commented:

retest this please

@SparkQA commented Dec 2, 2018

Test build #99563 has finished for PR 23120 at commit 8f2d69d.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Dec 2, 2018

Test build #99572 has finished for PR 23120 at commit e09d417.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan (Contributor) commented:

thanks, merging to master!

@asfgit closed this in 11e5f1b on Dec 3, 2018
@cloud-fan (Contributor) commented:

Hi @MaxGekk, since this changes the result (although it makes it better), do you mind adding a migration guide note? thanks!

@MaxGekk (Member Author) commented Dec 5, 2018

The PR #23235 updates the SQL migration guide.

@HyukjinKwon (Member) commented:

a late LGTM as well

asfgit pushed a commit that referenced this pull request Dec 5, 2018
## What changes were proposed in this pull request?

Updated the SQL migration guide according to the changes in #23120.

Closes #23235 from MaxGekk/failuresafe-partial-result-followup.

Lead-authored-by: Maxim Gekk <maxim.gekk@databricks.com>
Co-authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
jackylee-ch pushed a commit to jackylee-ch/spark that referenced this pull request Feb 18, 2019 (same description as this PR; closes apache#23120 from MaxGekk/failuresafe-partial-result).

jackylee-ch pushed a commit to jackylee-ch/spark that referenced this pull request Feb 18, 2019 (same description as the migration-guide follow-up; closes apache#23235 from MaxGekk/failuresafe-partial-result-followup).
@MaxGekk deleted the failuresafe-partial-result branch on August 17, 2019.