Hello, is this issue resolved? I am still getting this error in Iceberg 1.4.2 while trying to write in Iceberg format to ADLS using Spark streaming.
It was actually resolved earlier, but I overlooked it. To ignore the column ordering, set the Spark write config "spark.sql.iceberg.check-ordering" to "false". It is used in org.apache.iceberg.spark.source.SparkWriteBuilder.validateOrMergeWriteSchema(); if it is not provided, the default is true and ordering is checked.
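The config key above comes straight from the comment; the surrounding session setup below is only a sketch (it needs a Spark + Iceberg runtime on the classpath, and the app name and table are placeholders):

```java
import org.apache.spark.sql.SparkSession;

// Sketch only: assumes a Spark + Iceberg runtime is available.
public class DisableOrderingCheck {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("iceberg-write") // placeholder app name
            // Disable the column-ordering check consulted by
            // SparkWriteBuilder.validateOrMergeWriteSchema()
            .config("spark.sql.iceberg.check-ordering", "false")
            .getOrCreate();
        // ... then write as usual, e.g.:
        // df.writeTo("catalog.db.table").append();
    }
}
```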
This issue has been automatically marked as stale because it has been open for 180 days with no activity. It will be closed in the next 14 days if no further activity occurs. To permanently prevent this issue from being considered stale, add the label 'not-stale', but commenting on the issue is preferred when possible.
For this test case, https://github.com/apache/incubator-iceberg/blob/6f28abfa62838d531be4faa93273965665af933d/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java
if I replace https://github.com/apache/incubator-iceberg/blob/6f28abfa62838d531be4faa93273965665af933d/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java#L135 with
df.select("data", "id").write()
the test case fails with the following error:
Cannot write incompatible dataset to table with schema:
table {
1: id: optional int
2: data: optional string
}
Problems:
data is out of order, before id
java.lang.IllegalArgumentException: Cannot write incompatible dataset to table with schema:
table {
1: id: optional int
2: data: optional string
}
Problems:
data is out of order, before id
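To make the failure concrete, here is a minimal, self-contained sketch of the kind of ordering validation that produces the "out of order" problems above. The class and method names are simplified stand-ins, not the actual CheckCompatibility implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of an ordering check: a column is flagged when it
// appears before another column that the table schema orders ahead of it.
public class OrderingCheckSketch {
    static List<String> orderingProblems(List<String> tableOrder, List<String> inputOrder) {
        List<String> problems = new ArrayList<>();
        for (int i = 0; i < inputOrder.size(); i++) {
            String col = inputOrder.get(i);
            int expected = tableOrder.indexOf(col);
            // Look for a later input column that should have come first.
            for (int j = i + 1; j < inputOrder.size(); j++) {
                if (tableOrder.indexOf(inputOrder.get(j)) < expected) {
                    problems.add(col + " is out of order, before " + inputOrder.get(j));
                    break;
                }
            }
        }
        return problems;
    }

    public static void main(String[] args) {
        List<String> table = Arrays.asList("id", "data");
        // df.select("data", "id") presents the columns in reversed order:
        System.out.println(orderingProblems(table, Arrays.asList("data", "id")));
        // prints [data is out of order, before id]
    }
}
```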
However, if I set checkOrdering to false here, https://github.com/apache/incubator-iceberg/blob/949c6a98ac80acec10568070772082c1178eb739/api/src/main/java/org/apache/iceberg/types/CheckCompatibility.java
the test fails with the following assertion error:
Result rows should match expected:<[{"id"=1,"data"="a"}, {"id"=2,"data"="b"}, {"id"=3,"data"="c"}, {"id"=4,"data"="null"}]> but was:<[{"id"=1,"data"=""}, {"id"=2,"data"=""}, {"id"=3,"data"=""}, {"id"=4,"data"="�"}]>
Expected :[{"id"=1,"data"="a"}, {"id"=2,"data"="b"}, {"id"=3,"data"="c"}, {"id"=4,"data"="null"}]
Actual :[{"id"=1,"data"=""}, {"id"=2,"data"=""}, {"id"=3,"data"=""}, {"id"=4,"data"="�"}]
This is because the PartitionSpec accessors are built from the Iceberg table schema. If we update the code to build the accessors from the input schema instead, the reorder test case passes.
This is shown in PR #745.
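The accessor problem described above can be illustrated with a small, self-contained sketch (simplified names; position-based lookup is the only part carried over from the real PartitionSpec accessors):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration: an accessor that resolves a column to a row
// position misreads rows when its positions come from the table schema but
// the rows are laid out in the input column order.
public class AccessorSketch {
    static Object accessorFromSchema(List<String> schemaOrder, String column, Object[] row) {
        // Position-based lookup, like a PartitionSpec accessor.
        return row[schemaOrder.indexOf(column)];
    }

    public static void main(String[] args) {
        List<String> tableSchema = Arrays.asList("id", "data"); // id=0, data=1
        List<String> inputSchema = Arrays.asList("data", "id"); // after select("data", "id")

        Object[] row = {"a", 1}; // laid out in the *input* order

        // Positions from the table schema read the wrong slot:
        System.out.println(accessorFromSchema(tableSchema, "data", row)); // prints 1 (the id!)
        // Positions from the input schema read the right slot:
        System.out.println(accessorFromSchema(inputSchema, "data", row)); // prints a
    }
}
```

This mirrors why the test's partition values come back garbled when only the ordering check is disabled.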
We are trying to understand whether there is a specific reason that checkOrdering is not set to false by default or exposed as a parameter, and that the PartitionSpec accessors are built from the table schema instead of the input schema.
If possible, we would like checkOrdering to be exposed as a configurable parameter so that it can be turned off and write jobs do not have to use the same column ordering as the Iceberg table.
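A minimal sketch of what the proposed configurable check could look like (hypothetical method and class names; the real CheckCompatibility also validates types and field IDs, which this stand-in reduces to a same-columns check):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: the same compatibility validation, with ordering
// enforcement made optional via a parameter.
public class ConfigurableCheckSketch {
    static boolean isWriteCompatible(List<String> tableCols, List<String> inputCols,
                                     boolean checkOrdering) {
        // Both sides must contain the same columns regardless of the flag
        // (stand-in for the real type/field-id checks).
        if (!(tableCols.containsAll(inputCols) && inputCols.containsAll(tableCols))) {
            return false;
        }
        // Only enforce identical ordering when asked to.
        return !checkOrdering || tableCols.equals(inputCols);
    }

    public static void main(String[] args) {
        List<String> table = Arrays.asList("id", "data");
        List<String> reordered = Arrays.asList("data", "id");
        System.out.println(isWriteCompatible(table, reordered, true));  // prints false
        System.out.println(isWriteCompatible(table, reordered, false)); // prints true
    }
}
```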