[SPARK-32268][SQL] Row-level Runtime Filtering
* [SPARK-32268][SQL] Row-level Runtime Filtering

This PR proposes row-level runtime filters in Spark to reduce intermediate data volume for operators like shuffle, join, and aggregate, and hence improve performance. We propose two mechanisms: semi-join filters and Bloom filters, which are designed to co-exist side by side behind feature configs.
[Design Doc](https://docs.google.com/document/d/16IEuyLeQlubQkH8YuVuXWKo2-grVIoDJqQpHZrE7q04/edit?usp=sharing) with more details.

With the semi-join filter, we see 9 queries improve on the TPC-DS 3TB benchmark, with no regressions.
With the Bloom filter, we see 10 queries improve on the TPC-DS 3TB benchmark, with no regressions.
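
For reference, a minimal sketch of how one might turn either mechanism on (config names as introduced by this PR; both default to off):

```scala
// Minimal sketch: enable the Bloom filter variant of the runtime filter.
spark.conf.set("spark.sql.optimizer.runtime.bloomFilter.enabled", "true")
// Or the semi-join variant; the two mechanisms sit behind separate flags.
spark.conf.set("spark.sql.optimizer.runtimeFilter.semiJoinReduction.enabled", "true")
```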

No

Added tests

Closes apache#35789 from somani/rf.

Lead-authored-by: Abhishek Somani <abhishek.somani@databricks.com>
Co-authored-by: Abhishek Somani <abh.somani@gmail.com>
Co-authored-by: Yuming Wang <yumwang@ebay.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
(cherry picked from commit 1f4e4c8)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-32268][TESTS][FOLLOWUP] Fix `BloomFilterAggregateQuerySuite` failed in ansi mode

`Test that might_contain errors out non-constant Bloom filter` in `BloomFilterAggregateQuerySuite` failed in ANSI mode because `Numeric <=> Binary` casts are [not allowed in ANSI mode](apache#30260), so the content of `exception.getMessage` differs from that of non-ANSI mode.

This PR changes the test case to ensure that the error messages of `ansi` and `non-ansi` modes are consistent.
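
A sketch of the idea (illustrative, not the exact diff): keep the non-constant input binary-typed via a cast that ANSI mode allows, so both modes surface the same `might_contain` type-check error:

```scala
// Hypothetical test shape: string-to-binary casts are legal under ANSI mode,
// so the failure below is the might_contain type-check error in both modes.
val exception = intercept[AnalysisException] {
  spark.sql(
    """SELECT might_contain(cast(a as binary), cast(5 as long))
      |FROM values (cast(1 as string)), (cast(2 as string)) as t(a)""".stripMargin)
}
assert(exception.getMessage.contains(
  "The Bloom filter binary input to might_contain should be either a " +
    "constant value or a scalar subquery expression"))
```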

Bug fix.

No

- Pass GA
- Local Test

**Before**

```
export SPARK_ANSI_SQL_MODE=false
mvn clean test -pl sql/core -am -Dtest=none -DwildcardSuites=org.apache.spark.sql.BloomFilterAggregateQuerySuite
```

```
Run completed in 23 seconds, 537 milliseconds.
Total number of tests run: 8
Suites: completed 2, aborted 0
Tests: succeeded 8, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
```

```
export SPARK_ANSI_SQL_MODE=true
mvn clean test -pl sql/core -am -Dtest=none -DwildcardSuites=org.apache.spark.sql.BloomFilterAggregateQuerySuite
```

```
- Test that might_contain errors out non-constant Bloom filter *** FAILED ***
  "cannot resolve 'CAST(t.a AS BINARY)' due to data type mismatch:
   cannot cast bigint to binary with ANSI mode on.
   If you have to cast bigint to binary, you can set spark.sql.ansi.enabled as false.
  ; line 2 pos 21;
  'Project [unresolvedalias('might_contain(cast(a#2424L as binary), cast(5 as bigint)), None)]
  +- SubqueryAlias t
     +- LocalRelation [a#2424L]
  " did not contain "The Bloom filter binary input to might_contain should be either a constant value or a scalar subquery expression" (BloomFilterAggregateQuerySuite.scala:171)
```

**After**
```
export SPARK_ANSI_SQL_MODE=false
mvn clean test -pl sql/core -am -Dtest=none -DwildcardSuites=org.apache.spark.sql.BloomFilterAggregateQuerySuite
```

```
Run completed in 26 seconds, 544 milliseconds.
Total number of tests run: 8
Suites: completed 2, aborted 0
Tests: succeeded 8, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
```

```
export SPARK_ANSI_SQL_MODE=true
mvn clean test -pl sql/core -am -Dtest=none -DwildcardSuites=org.apache.spark.sql.BloomFilterAggregateQuerySuite
```

```
Run completed in 25 seconds, 289 milliseconds.
Total number of tests run: 8
Suites: completed 2, aborted 0
Tests: succeeded 8, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
```

Closes apache#35953 from LuciferYang/SPARK-32268-FOLLOWUP.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: Yuming Wang <yumwang@ebay.com>
(cherry picked from commit 7165123)
Signed-off-by: Yuming Wang <yumwang@ebay.com>

* [SPARK-32268][SQL][FOLLOWUP] Add RewritePredicateSubquery below the InjectRuntimeFilter

Add `RewritePredicateSubquery` below the `InjectRuntimeFilter` in `SparkOptimizer`.

It seems that when the runtime filter uses an in-subquery to do the filtering, it is not converted to a semi-join as the design doc says.

This PR fixes the issue.

No, not released

Improved the test by adding a check that the semi-join exists when the runtime filter uses the in-subquery code path.
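
Roughly what the added check looks like (a sketch; table names and config keys follow `InjectRuntimeFilterSuite` and `SQLConf`):

```scala
// Sketch: with the Bloom filter flag off, injection falls back to an
// in-subquery, which RewritePredicateSubquery should rewrite to a semi-join.
withSQLConf(
  SQLConf.RUNTIME_BLOOM_FILTER_ENABLED.key -> "false",
  SQLConf.RUNTIME_FILTER_SEMI_JOIN_REDUCTION_ENABLED.key -> "true") {
  val plan = sql("select * from bf1 join bf2 on bf1.c1 = bf2.c2 where bf2.a2 = 62")
    .queryExecution.optimizedPlan
  assert(plan.collect { case j: Join if j.joinType == LeftSemi => j }.nonEmpty)
}
```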

Closes apache#35998 from ulysses-you/SPARK-32268-FOllOWUP.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
(cherry picked from commit c0c52dd)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-32268][SQL][FOLLOWUP] Add ColumnPruning in injectBloomFilter

Add `ColumnPruning` in `InjectRuntimeFilter.injectBloomFilter` to optimize the BloomFilter creation query.

It seems the BloomFilter subqueries injected by `InjectRuntimeFilter` read as many columns as `filterCreationSidePlan` produces. This does not match the "only scan the required columns" goal stated in the design doc. We can check this with a simple case in `InjectRuntimeFilterSuite`:
```scala
withSQLConf(SQLConf.RUNTIME_BLOOM_FILTER_ENABLED.key -> "true",
  SQLConf.RUNTIME_BLOOM_FILTER_APPLICATION_SIDE_SCAN_SIZE_THRESHOLD.key -> "3000",
  SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> "2000") {
  val query = "select * from bf1 join bf2 on bf1.c1 = bf2.c2 where bf2.a2 = 62"
  sql(query).explain()
}
```
The reason is that these subqueries have not been optimized by `ColumnPruning`; this PR fixes that.
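
The fix is essentially one extra rule application when building the creation-side aggregate (a sketch; identifiers are abridged from `InjectRuntimeFilter`):

```scala
// Sketch: prune the creation-side plan so the Bloom filter subquery only
// scans the join key column instead of every column the plan produces.
val alias = Alias(bloomFilterAggExpr, "bloomFilter")()
val aggregate = ColumnPruning(Aggregate(Nil, Seq(alias), filterCreationSidePlan))
```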

No, not released

Improved the test by adding `columnPruningTakesEffect` to check the optimized plan of the Bloom filter join.

Closes apache#36047 from Flyangz/SPARK-32268-FOllOWUP.

Authored-by: Yang Liu <yintai@xiaohongshu.com>
Signed-off-by: Yuming Wang <yumwang@ebay.com>
(cherry picked from commit c98725a)
Signed-off-by: Yuming Wang <yumwang@ebay.com>

* [SPARK-32268][SQL][TESTS][FOLLOW-UP] Use function registry in the SparkSession

This PR proposes:
1. Use the function registry in the Spark Session being used
2. Move function registration into `beforeAll`

Registering the function in the `builtin` registry rather than inside `beforeAll` can affect other tests. See also https://lists.apache.org/thread/jp0ccqv10ht716g9xldm2ohdv3mpmmz1.
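
A hedged sketch of the resulting pattern (names illustrative):

```scala
// Register against the session's own registry inside beforeAll, instead of
// mutating FunctionRegistry.builtin, so other suites never see the function.
override def beforeAll(): Unit = {
  super.beforeAll()
  spark.sessionState.functionRegistry.registerFunction(
    FunctionIdentifier("bloom_filter_agg"),
    new ExpressionInfo(classOf[BloomFilterAggregate].getName, "bloom_filter_agg"),
    (children: Seq[Expression]) =>
      new BloomFilterAggregate(children.head).toAggregateExpression())
}
```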

No, test-only.

Unit tests fixed.

Closes apache#36576 from HyukjinKwon/SPARK-32268-followup.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit c5351f8)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
zgzzbws authored and songzhxlh-max committed Oct 12, 2022
1 parent a685cac commit 81d33d7
Showing 14 changed files with 1,462 additions and 16 deletions.
@@ -163,6 +163,13 @@ int getVersionNumber() {
*/
public abstract void writeTo(OutputStream out) throws IOException;

/**
* @return the number of set bits in this {@link BloomFilter}.
*/
public long cardinality() {
throw new UnsupportedOperationException("Not implemented");
}

/**
* Reads in a {@link BloomFilter} from an input stream. It is the caller's responsibility to close
* the stream.
@@ -207,6 +207,11 @@ public BloomFilter intersectInPlace(BloomFilter other) throws IncompatibleMergeException {
return this;
}

@Override
public long cardinality() {
return this.bits.cardinality();
}

private BloomFilterImpl checkCompatibilityForMerge(BloomFilter other)
throws IncompatibleMergeException {
// Duplicates the logic of `isCompatible` here to provide better error message.
@@ -0,0 +1,113 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.sql.catalyst.expressions

import java.io.ByteArrayInputStream

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
import org.apache.spark.sql.catalyst.expressions.codegen.{CodegenContext, CodeGenerator, ExprCode, JavaCode, TrueLiteral}
import org.apache.spark.sql.catalyst.expressions.codegen.Block.BlockHelper
import org.apache.spark.sql.catalyst.trees.TreePattern.OUTER_REFERENCE
import org.apache.spark.sql.types._
import org.apache.spark.util.sketch.BloomFilter

/**
* An internal scalar function that returns the membership check result (either true or false)
* for values of `valueExpression` in the Bloom filter represented by `bloomFilterExpression`.
* Note that since the function is "might contain", always returning true regardless is not
* wrong.
* Note that this expression requires that `bloomFilterExpression` is either a constant value or
* an uncorrelated scalar subquery. This is sufficient for the Bloom filter join rewrite.
*
* @param bloomFilterExpression the Binary data of Bloom filter.
* @param valueExpression the Long value to be tested for the membership of `bloomFilterExpression`.
*/
case class BloomFilterMightContain(
bloomFilterExpression: Expression,
valueExpression: Expression) extends BinaryExpression {

override def nullable: Boolean = true
override def left: Expression = bloomFilterExpression
override def right: Expression = valueExpression
override def prettyName: String = "might_contain"
override def dataType: DataType = BooleanType

override def checkInputDataTypes(): TypeCheckResult = {
(left.dataType, right.dataType) match {
case (BinaryType, NullType) | (NullType, LongType) | (NullType, NullType) |
(BinaryType, LongType) =>
bloomFilterExpression match {
case e : Expression if e.foldable => TypeCheckResult.TypeCheckSuccess
case subquery : PlanExpression[_] if !subquery.containsPattern(OUTER_REFERENCE) =>
TypeCheckResult.TypeCheckSuccess
case _ =>
TypeCheckResult.TypeCheckFailure(s"The Bloom filter binary input to $prettyName " +
"should be either a constant value or a scalar subquery expression")
}
case _ => TypeCheckResult.TypeCheckFailure(s"Input to function $prettyName should have " +
s"been ${BinaryType.simpleString} followed by a value with ${LongType.simpleString}, " +
s"but it's [${left.dataType.catalogString}, ${right.dataType.catalogString}].")
}
}

override protected def withNewChildrenInternal(
newBloomFilterExpression: Expression,
newValueExpression: Expression): BloomFilterMightContain =
copy(bloomFilterExpression = newBloomFilterExpression,
valueExpression = newValueExpression)

// The bloom filter created from `bloomFilterExpression`.
@transient private lazy val bloomFilter = {
val bytes = bloomFilterExpression.eval().asInstanceOf[Array[Byte]]
if (bytes == null) null else deserialize(bytes)
}

override def eval(input: InternalRow): Any = {
if (bloomFilter == null) {
null
} else {
val value = valueExpression.eval(input)
if (value == null) null else bloomFilter.mightContainLong(value.asInstanceOf[Long])
}
}

override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode = {
if (bloomFilter == null) {
ev.copy(isNull = TrueLiteral, value = JavaCode.defaultLiteral(dataType))
} else {
val bf = ctx.addReferenceObj("bloomFilter", bloomFilter, classOf[BloomFilter].getName)
val valueEval = valueExpression.genCode(ctx)
ev.copy(code = code"""
${valueEval.code}
boolean ${ev.isNull} = ${valueEval.isNull};
${CodeGenerator.javaType(dataType)} ${ev.value} = ${CodeGenerator.defaultValue(dataType)};
if (!${ev.isNull}) {
${ev.value} = $bf.mightContainLong((Long)${valueEval.value});
}""")
}
}

final def deserialize(bytes: Array[Byte]): BloomFilter = {
val in = new ByteArrayInputStream(bytes)
val bloomFilter = BloomFilter.readFrom(in)
in.close()
bloomFilter
}

}
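
For context, roughly how the expression is exercised end-to-end once registered for tests (`bloom_filter_agg`, `t1`, and `t2` are assumptions from the test suite):

```scala
// Sketch: the filter side is an uncorrelated scalar subquery, which
// checkInputDataTypes accepts; a non-constant column there would be rejected.
spark.sql(
  """SELECT *
    |FROM t2
    |WHERE might_contain((SELECT bloom_filter_agg(xxhash64(t1.a)) FROM t1),
    |                    xxhash64(t2.b))""".stripMargin)
```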
@@ -0,0 +1,179 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.sql.catalyst.expressions.aggregate

import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
import org.apache.spark.sql.catalyst.analysis.TypeCheckResult._
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.trees.TernaryLike
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types._
import org.apache.spark.util.sketch.BloomFilter

/**
* An internal aggregate function that creates a Bloom filter from input values.
*
* @param child Child expression of Long values for creating a Bloom filter.
* @param estimatedNumItemsExpression The number of estimated distinct items (optional).
* @param numBitsExpression The number of bits to use (optional).
*/
case class BloomFilterAggregate(
child: Expression,
estimatedNumItemsExpression: Expression,
numBitsExpression: Expression,
override val mutableAggBufferOffset: Int,
override val inputAggBufferOffset: Int)
extends TypedImperativeAggregate[BloomFilter] with TernaryLike[Expression] {

def this(child: Expression, estimatedNumItemsExpression: Expression,
numBitsExpression: Expression) = {
this(child, estimatedNumItemsExpression, numBitsExpression, 0, 0)
}

def this(child: Expression, estimatedNumItemsExpression: Expression) = {
this(child, estimatedNumItemsExpression,
// 1 byte per item.
Multiply(estimatedNumItemsExpression, Literal(8L)))
}

def this(child: Expression) = {
this(child, Literal(SQLConf.get.getConf(SQLConf.RUNTIME_BLOOM_FILTER_EXPECTED_NUM_ITEMS)),
Literal(SQLConf.get.getConf(SQLConf.RUNTIME_BLOOM_FILTER_NUM_BITS)))
}

override def checkInputDataTypes(): TypeCheckResult = {
(first.dataType, second.dataType, third.dataType) match {
case (_, NullType, _) | (_, _, NullType) =>
TypeCheckResult.TypeCheckFailure("Null typed values cannot be used as size arguments")
case (LongType, LongType, LongType) =>
if (!estimatedNumItemsExpression.foldable) {
TypeCheckFailure("The estimated number of items provided must be a constant literal")
} else if (estimatedNumItems <= 0L) {
TypeCheckFailure("The estimated number of items must be a positive value " +
s" (current value = $estimatedNumItems)")
} else if (!numBitsExpression.foldable) {
TypeCheckFailure("The number of bits provided must be a constant literal")
} else if (numBits <= 0L) {
TypeCheckFailure("The number of bits must be a positive value " +
s" (current value = $numBits)")
} else {
require(estimatedNumItems <=
SQLConf.get.getConf(SQLConf.RUNTIME_BLOOM_FILTER_MAX_NUM_ITEMS))
require(numBits <= SQLConf.get.getConf(SQLConf.RUNTIME_BLOOM_FILTER_MAX_NUM_BITS))
TypeCheckSuccess
}
case _ => TypeCheckResult.TypeCheckFailure(s"Input to function $prettyName should have " +
s"been a ${LongType.simpleString} value followed with two ${LongType.simpleString} size " +
s"arguments, but it's [${first.dataType.catalogString}, " +
s"${second.dataType.catalogString}, ${third.dataType.catalogString}]")
}
}
override def nullable: Boolean = true

override def dataType: DataType = BinaryType

override def prettyName: String = "bloom_filter_agg"

// Mark as lazy so that `estimatedNumItems` is not evaluated during tree transformation.
private lazy val estimatedNumItems: Long =
Math.min(estimatedNumItemsExpression.eval().asInstanceOf[Number].longValue,
SQLConf.get.getConf(SQLConf.RUNTIME_BLOOM_FILTER_MAX_NUM_ITEMS))

// Mark as lazy so that `numBits` is not evaluated during tree transformation.
private lazy val numBits: Long =
Math.min(numBitsExpression.eval().asInstanceOf[Number].longValue,
SQLConf.get.getConf(SQLConf.RUNTIME_BLOOM_FILTER_MAX_NUM_BITS))

override def first: Expression = child

override def second: Expression = estimatedNumItemsExpression

override def third: Expression = numBitsExpression

override protected def withNewChildrenInternal(
newChild: Expression,
newEstimatedNumItemsExpression: Expression,
newNumBitsExpression: Expression): BloomFilterAggregate = {
copy(child = newChild, estimatedNumItemsExpression = newEstimatedNumItemsExpression,
numBitsExpression = newNumBitsExpression)
}

override def createAggregationBuffer(): BloomFilter = {
BloomFilter.create(estimatedNumItems, numBits)
}

override def update(buffer: BloomFilter, inputRow: InternalRow): BloomFilter = {
val value = child.eval(inputRow)
// Ignore null values.
if (value == null) {
return buffer
}
buffer.putLong(value.asInstanceOf[Long])
buffer
}

override def merge(buffer: BloomFilter, other: BloomFilter): BloomFilter = {
buffer.mergeInPlace(other)
}

override def eval(buffer: BloomFilter): Any = {
if (buffer.cardinality() == 0) {
// There's no set bit in the Bloom filter and hence no non-null value has been processed.
return null
}
serialize(buffer)
}

override def withNewMutableAggBufferOffset(newOffset: Int): BloomFilterAggregate =
copy(mutableAggBufferOffset = newOffset)

override def withNewInputAggBufferOffset(newOffset: Int): BloomFilterAggregate =
copy(inputAggBufferOffset = newOffset)

override def serialize(obj: BloomFilter): Array[Byte] = {
BloomFilterAggregate.serialize(obj)
}

override def deserialize(bytes: Array[Byte]): BloomFilter = {
BloomFilterAggregate.deserialize(bytes)
}
}

object BloomFilterAggregate {
final def serialize(obj: BloomFilter): Array[Byte] = {
// BloomFilterImpl.writeTo() writes 2 integers (version number and num hash functions), hence
// the +8
val size = (obj.bitSize() / 8) + 8
require(size <= Integer.MAX_VALUE, s"actual number of bits is too large $size")
val out = new ByteArrayOutputStream(size.intValue())
obj.writeTo(out)
out.close()
out.toByteArray
}

final def deserialize(bytes: Array[Byte]): BloomFilter = {
val in = new ByteArrayInputStream(bytes)
val bloomFilter = BloomFilter.readFrom(in)
in.close()
bloomFilter
}
}
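
A small sanity sketch of the buffer life-cycle implemented above, using the sketch library's public API directly:

```scala
import org.apache.spark.util.sketch.BloomFilter

val buffer = BloomFilter.create(1000L, 8000L)      // as createAggregationBuffer(): 1 byte per item
assert(buffer.cardinality() == 0)                  // eval() would return null at this point
buffer.putLong(42L)                                // the update() path for a non-null input
assert(buffer.mightContainLong(42L))               // what might_contain later checks
val bytes = BloomFilterAggregate.serialize(buffer) // the Binary result handed to the subquery
```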
@@ -352,6 +352,8 @@ case class Invoke(

lazy val argClasses = ScalaReflection.expressionJavaClasses(arguments)

final override val nodePatterns: Seq[TreePattern] = Seq(INVOKE)

override def nullable: Boolean = targetObject.nullable || needNullCheck || returnNullable
override def children: Seq[Expression] = targetObject +: arguments
override def inputTypes: Seq[AbstractDataType] =
@@ -287,6 +287,22 @@ trait PredicateHelper extends AliasHelper with Logging {
}
}
}

/**
* Returns whether an expression is likely to be selective
*/
def isLikelySelective(e: Expression): Boolean = e match {
case Not(expr) => isLikelySelective(expr)
case And(l, r) => isLikelySelective(l) || isLikelySelective(r)
case Or(l, r) => isLikelySelective(l) && isLikelySelective(r)
case _: StringRegexExpression => true
case _: BinaryComparison => true
case _: In | _: InSet => true
case _: StringPredicate => true
// case BinaryPredicate(_) => true
case _: MultiLikeBase => true
case _ => false
}
}
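
For intuition, what the helper classifies as likely selective (an illustrative sketch, called from a class mixing in `PredicateHelper`):

```scala
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.types.IntegerType

val a = AttributeReference("a", IntegerType)()
isLikelySelective(EqualTo(a, Literal(1)))             // true: BinaryComparison
isLikelySelective(In(a, Seq(Literal(1), Literal(2)))) // true: In
isLikelySelective(Not(EqualTo(a, Literal(1))))        // true: Not over a selective child
isLikelySelective(a)                                  // false: a bare attribute reference
```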

@ExpressionDescription(
@@ -30,7 +30,7 @@ import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.{TypeCheckFailure, TypeCheckSuccess}
import org.apache.spark.sql.catalyst.expressions.codegen._
import org.apache.spark.sql.catalyst.expressions.codegen.Block._
import org.apache.spark.sql.catalyst.trees.TreePattern.{LIKE_FAMLIY, TreePattern}
import org.apache.spark.sql.catalyst.trees.TreePattern.{LIKE_FAMLIY, REGEXP_EXTRACT_FAMILY, REGEXP_REPLACE, TreePattern}
import org.apache.spark.sql.catalyst.util.{GenericArrayData, StringUtils}
import org.apache.spark.sql.errors.QueryExecutionErrors
import org.apache.spark.sql.types._
@@ -554,6 +554,7 @@ case class RegExpReplace(subject: Expression, regexp: Expression, rep: Expression
@transient private var lastReplacementInUTF8: UTF8String = _
// result buffer write by Matcher
@transient private lazy val result: StringBuffer = new StringBuffer
final override val nodePatterns: Seq[TreePattern] = Seq(REGEXP_REPLACE)

override def nullSafeEval(s: Any, p: Any, r: Any, i: Any): Any = {
if (s.toString.indexOf('$') > -1 || p.toString.indexOf('$') > -1) {
@@ -749,6 +750,8 @@ abstract class RegExpExtractBase
// last regex pattern, we cache it for performance concern
@transient private var pattern: Pattern = _

final override val nodePatterns: Seq[TreePattern] = Seq(REGEXP_EXTRACT_FAMILY)

override def inputTypes: Seq[AbstractDataType] = Seq(StringType, StringType, IntegerType)
override def first: Expression = subject
override def second: Expression = regexp