
Enable regular expressions on GPU by default [databricks] #4740

Merged (9 commits, Feb 10, 2022)
docs/compatibility.md (15 changes: 4 additions & 11 deletions)

@@ -499,18 +499,11 @@ The following Apache Spark regular expression functions and expressions are supported
 - `regexp_like`
 - `regexp_replace`
 
-These operations are disabled by default because of known incompatibilities between the Java regular expression
-engine that Spark uses and the cuDF regular expression engine on the GPU, and also because the regular expression
-kernels can potentially have high memory overhead.
+Regular expression evaluation on the GPU can potentially have high memory overhead and cause out-of-memory errors. To
+disable regular expressions on the GPU, set `spark.rapids.sql.regexp.enabled=false`.
 
-These operations can be enabled on the GPU with the following configuration settings:
-
-- `spark.rapids.sql.expression.RLike=true` (for `RLIKE`, `regexp`, and `regexp_like`)
-- `spark.rapids.sql.expression.RegExpReplace=true` for `regexp_replace`
-- `spark.rapids.sql.expression.RegExpExtract=true` for `regexp_extract`
-
-Even when these expressions are enabled, there are instances where regular expression operations will fall back to
-CPU when the RAPIDS Accelerator determines that a pattern is either unsupported or would produce incorrect results on the GPU.
+There are instances where regular expression operations will fall back to CPU when the RAPIDS Accelerator determines
+that a pattern is either unsupported or would produce incorrect results on the GPU.
 
 Here are some examples of regular expression patterns that are not supported on the GPU and will fall back to the CPU.

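To make the new opt-out model concrete, here is a minimal PySpark sketch of disabling GPU regular expressions for an entire job, assuming the RAPIDS Accelerator plugin jar is already on the classpath (the application name is made up for illustration):

```python
from pyspark.sql import SparkSession

# Minimal sketch, assuming the RAPIDS Accelerator is installed. With this
# change, regular expressions run on the GPU by default; the single flag
# below opts the whole job out, e.g. if regex kernels cause out-of-memory
# errors.
spark = (SparkSession.builder
         .appName("regexp-opt-out")  # hypothetical app name
         .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
         .config("spark.rapids.sql.regexp.enabled", "false")  # all regex ops fall back to CPU
         .getOrCreate())
```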
docs/configs.md (7 changes: 4 additions & 3 deletions)

@@ -116,6 +116,7 @@ Name | Description | Default Value
 <a name="sql.python.gpu.enabled"></a>spark.rapids.sql.python.gpu.enabled|This is an experimental feature and is likely to change in the future. Enable (true) or disable (false) support for scheduling Python Pandas UDFs with GPU resources. When enabled, pandas UDFs are assumed to share the same GPU that the RAPIDs accelerator uses and will honor the python GPU configs|false
 <a name="sql.reader.batchSizeBytes"></a>spark.rapids.sql.reader.batchSizeBytes|Soft limit on the maximum number of bytes the reader reads per batch. The readers will read chunks of data until this limit is met or exceeded. Note that the reader may estimate the number of bytes that will be used on the GPU in some cases based on the schema and number of rows in each batch.|2147483647
 <a name="sql.reader.batchSizeRows"></a>spark.rapids.sql.reader.batchSizeRows|Soft limit on the maximum number of rows the reader will read per batch. The orc and parquet readers will read row groups until this limit is met or exceeded. The limit is respected by the csv reader.|2147483647
+<a name="sql.regexp.enabled"></a>spark.rapids.sql.regexp.enabled|Specifies whether regular expressions should be evaluated on GPU. Complex expressions can cause out of memory issues. Setting this config to false will make any operation using regular expressions fall back to CPU.|true
 <a name="sql.replaceSortMergeJoin.enabled"></a>spark.rapids.sql.replaceSortMergeJoin.enabled|Allow replacing sortMergeJoin with HashJoin|true
 <a name="sql.rowBasedUDF.enabled"></a>spark.rapids.sql.rowBasedUDF.enabled|When set to true, optimizes a row-based UDF in a GPU operation by transferring only the data it needs between GPU and CPU inside a query operation, instead of falling this operation back to CPU. This is an experimental feature, and this config might be removed in the future.|false
 <a name="sql.shuffle.spillThreads"></a>spark.rapids.sql.shuffle.spillThreads|Number of threads used to spill shuffle data to disk in the background.|6
@@ -265,11 +266,11 @@ Name | SQL Function(s) | Description | Default Value | Notes
 <a name="sql.expression.PromotePrecision"></a>spark.rapids.sql.expression.PromotePrecision| |PromotePrecision before arithmetic operations between DecimalType data|true|None|
 <a name="sql.expression.PythonUDF"></a>spark.rapids.sql.expression.PythonUDF| |UDF run in an external python process. Does not actually run on the GPU, but the transfer of data to/from it can be accelerated|true|None|
 <a name="sql.expression.Quarter"></a>spark.rapids.sql.expression.Quarter|`quarter`|Returns the quarter of the year for date, in the range 1 to 4|true|None|
-<a name="sql.expression.RLike"></a>spark.rapids.sql.expression.RLike|`rlike`|RLike|false|This is disabled by default because the implementation is not 100% compatible. See the compatibility guide for more information.|
+<a name="sql.expression.RLike"></a>spark.rapids.sql.expression.RLike|`rlike`|Regular expression version of Like|true|None|
 <a name="sql.expression.Rand"></a>spark.rapids.sql.expression.Rand|`random`, `rand`|Generate a random column with i.i.d. uniformly distributed values in [0, 1)|true|None|
 <a name="sql.expression.Rank"></a>spark.rapids.sql.expression.Rank|`rank`|Window function that returns the rank value within the aggregation window|true|None|
-<a name="sql.expression.RegExpExtract"></a>spark.rapids.sql.expression.RegExpExtract|`regexp_extract`|RegExpExtract|false|This is disabled by default because the implementation is not 100% compatible. See the compatibility guide for more information.|
-<a name="sql.expression.RegExpReplace"></a>spark.rapids.sql.expression.RegExpReplace|`regexp_replace`|RegExpReplace support for string literal input patterns|false|This is disabled by default because the implementation is not 100% compatible. See the compatibility guide for more information.|
+<a name="sql.expression.RegExpExtract"></a>spark.rapids.sql.expression.RegExpExtract|`regexp_extract`|Extract a specific group identified by a regular expression|true|None|
+<a name="sql.expression.RegExpReplace"></a>spark.rapids.sql.expression.RegExpReplace|`regexp_replace`|String replace using a regular expression pattern|true|None|
 <a name="sql.expression.Remainder"></a>spark.rapids.sql.expression.Remainder|`%`, `mod`|Remainder or modulo|true|None|
 <a name="sql.expression.ReplicateRows"></a>spark.rapids.sql.expression.ReplicateRows| |Given an input row replicates the row N times|true|None|
 <a name="sql.expression.Rint"></a>spark.rapids.sql.expression.Rint|`rint`|Rounds up a double value to the nearest double equal to an integer|true|None|
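Because the per-expression flags in the table above now default to true, a job can still pin an individual operator to the CPU while leaving the rest of GPU regex support on. A short sketch of the relevant settings (assuming an existing `spark` session as above; whether each flag can be changed at runtime rather than at startup is an assumption here):

```python
# Sketch: global regex support stays on (the new default), while one
# expression is individually sent back to the CPU.
spark.conf.set("spark.rapids.sql.regexp.enabled", "true")             # new default
spark.conf.set("spark.rapids.sql.expression.RegExpReplace", "false")  # regexp_replace on CPU
spark.conf.set("spark.rapids.sql.expression.RLike", "true")           # rlike stays on GPU
```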
docs/supported_ops.md (12 changes: 6 additions & 6 deletions)

@@ -9849,8 +9849,8 @@ are limited.
 <tr>
 <td rowSpan="3">RLike</td>
 <td rowSpan="3">`rlike`</td>
-<td rowSpan="3">RLike</td>
-<td rowSpan="3">This is disabled by default because the implementation is not 100% compatible. See the compatibility guide for more information.</td>
+<td rowSpan="3">Regular expression version of Like</td>
+<td rowSpan="3">None</td>
 <td rowSpan="3">project</td>
 <td>str</td>
 <td> </td>
@@ -10037,8 +10037,8 @@ are limited.
 <tr>
 <td rowSpan="4">RegExpExtract</td>
 <td rowSpan="4">`regexp_extract`</td>
-<td rowSpan="4">RegExpExtract</td>
-<td rowSpan="4">This is disabled by default because the implementation is not 100% compatible. See the compatibility guide for more information.</td>
+<td rowSpan="4">Extract a specific group identified by a regular expression</td>
+<td rowSpan="4">None</td>
 <td rowSpan="4">project</td>
 <td>str</td>
 <td> </td>
@@ -10126,8 +10126,8 @@ are limited.
 <tr>
 <td rowSpan="4">RegExpReplace</td>
 <td rowSpan="4">`regexp_replace`</td>
-<td rowSpan="4">RegExpReplace support for string literal input patterns</td>
-<td rowSpan="4">This is disabled by default because the implementation is not 100% compatible. See the compatibility guide for more information.</td>
+<td rowSpan="4">String replace using a regular expression pattern</td>
+<td rowSpan="4">None</td>
 <td rowSpan="4">project</td>
 <td>str</td>
 <td> </td>
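For reference, a small PySpark example exercising the three expressions whose table entries changed above (the sample data is made up; with the new defaults all three are eligible for the GPU, falling back to CPU only for unsupported patterns):

```python
from pyspark.sql import Row

df = spark.createDataFrame([Row(s="abc123"), Row(s="xyz")])
df.createOrReplaceTempView("t")

# rlike, regexp_extract and regexp_replace now run on the GPU by default.
spark.sql("""
    SELECT s,
           s RLIKE '[a-z]+[0-9]+'           AS has_word_digits,
           regexp_extract(s, '([0-9]+)', 1) AS digits,
           regexp_replace(s, '[0-9]+', '#') AS masked
    FROM t
""").show()
```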
integration_tests/src/main/python/conditionals_test.py (6 changes: 2 additions & 4 deletions)

@@ -206,8 +206,7 @@ def test_conditional_with_side_effects_cast(data_gen):
     assert_gpu_and_cpu_are_equal_collect(
         lambda spark : unary_op_df(spark, data_gen).selectExpr(
             'IF(a RLIKE "^[0-9]{1,5}\\z", CAST(a AS INT), 0)'),
-        conf = {'spark.sql.ansi.enabled':True,
-                'spark.rapids.sql.expression.RLike': True})
+        conf = {'spark.sql.ansi.enabled':True})
 
 @pytest.mark.parametrize('data_gen', [mk_str_gen('[0-9]{1,9}')], ids=idfn)
 def test_conditional_with_side_effects_case_when(data_gen):
@@ -217,5 +216,4 @@ def test_conditional_with_side_effects_case_when(data_gen):
             WHEN a RLIKE "^[0-9]{1,3}\\z" THEN CAST(a AS INT) \
             WHEN a RLIKE "^[0-9]{4,6}\\z" THEN CAST(a AS INT) + 123 \
             ELSE -1 END'),
-        conf = {'spark.sql.ansi.enabled':True,
-                'spark.rapids.sql.expression.RLike': True})
+        conf = {'spark.sql.ansi.enabled':True})
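Outside the test harness (where `unary_op_df` and `assert_gpu_and_cpu_are_equal_collect` come from the integration-test framework), the same ANSI-mode query pattern looks roughly like this sketch with made-up input data; note that no per-expression RLike flag is needed any more:

```python
spark.conf.set("spark.sql.ansi.enabled", "true")

df = spark.createDataFrame([("123",), ("4567",), ("oops",)], ["a"])
df.createOrReplaceTempView("v")

# The CASE WHEN guards the ANSI cast: rows that match neither pattern take
# the ELSE branch, so no invalid cast is ever evaluated.
spark.sql(r"""
    SELECT CASE WHEN a RLIKE '^[0-9]{1,3}\\z' THEN CAST(a AS INT)
                WHEN a RLIKE '^[0-9]{4,6}\\z' THEN CAST(a AS INT) + 123
                ELSE -1 END AS r
    FROM v
""").show()
```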
integration_tests/src/main/python/qa_nightly_select_test.py (5 changes: 2 additions & 3 deletions)

@@ -1,4 +1,4 @@
-# Copyright (c) 2020, NVIDIA CORPORATION.
+# Copyright (c) 2020-2022, NVIDIA CORPORATION.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -147,8 +147,7 @@ def idfn(val):
     'spark.rapids.sql.hasNans': 'false',
     'spark.rapids.sql.castStringToFloat.enabled': 'true',
     'spark.rapids.sql.castFloatToIntegralTypes.enabled': 'true',
-    'spark.rapids.sql.castFloatToString.enabled': 'true',
-    'spark.rapids.sql.expression.RegExpReplace': 'true'
+    'spark.rapids.sql.castFloatToString.enabled': 'true'
     }
 
 _first_last_qa_conf = copy_and_update(_qa_conf, {