Remove the xfail for parquet test_read_merge_schema on Databricks (NVIDIA#597)

* Remove the xfail for parquet test_read_merge_schema because it works now
  with changes to FileSourceScan

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* remove unneeded import

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

Co-authored-by: Thomas Graves <tgraves@nvidia.com>
tgravescs authored Aug 21, 2020
1 parent 3e7b0db commit 0a56063
Showing 1 changed file with 0 additions and 3 deletions.
3 changes: 0 additions & 3 deletions integration_tests/src/main/python/parquet_test.py
@@ -15,7 +15,6 @@
 import pytest
 
 from asserts import assert_gpu_and_cpu_are_equal_collect, assert_gpu_and_cpu_writes_are_equal_collect, assert_gpu_fallback_collect
-from conftest import is_databricks_runtime
 from datetime import date, datetime, timezone
 from data_gen import *
 from marks import *
@@ -157,8 +156,6 @@ def test_simple_partitioned_read(spark_tmp_path, v1_enabled_list):
             lambda spark : spark.read.parquet(data_path),
             conf={'spark.sql.sources.useV1SourceList': v1_enabled_list})
 
-@pytest.mark.xfail(condition=is_databricks_runtime(),
-                   reason='https://github.com/NVIDIA/spark-rapids/issues/192')
 @pytest.mark.parametrize('v1_enabled_list', ["", "parquet"])
 def test_read_merge_schema(spark_tmp_path, v1_enabled_list):
     # Once https://github.com/NVIDIA/spark-rapids/issues/133 and https://github.com/NVIDIA/spark-rapids/issues/132 are fixed
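
For context, below is a minimal, self-contained sketch of the conditional xfail pattern being removed here. The is_databricks_runtime() stand-in and its environment-variable check are assumptions for illustration only; in the repository the real helper is imported from conftest, and the real test compares CPU and GPU results for merged-schema parquet reads.

import os

import pytest


def is_databricks_runtime():
    # Hypothetical detection logic; the actual conftest helper may decide differently.
    return os.environ.get('DATABRICKS_RUNTIME_VERSION') is not None


# With condition=True at collection time, a failing test is reported as XFAIL
# instead of FAIL, so CI stays green while the linked issue remains open.
@pytest.mark.xfail(condition=is_databricks_runtime(),
                   reason='https://github.com/NVIDIA/spark-rapids/issues/192')
@pytest.mark.parametrize('v1_enabled_list', ["", "parquet"])
def test_read_merge_schema_sketch(v1_enabled_list):
    # Placeholder body; removing the marker (as this commit does) means a
    # failure on Databricks once again fails the build.
    assert v1_enabled_list in ("", "parquet")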
