
Error bounds / probabilities / skewness as first-class Druid query results #7160

Open · leventov opened this issue Feb 28, 2019 · 2 comments

@leventov (Member) commented Feb 28, 2019

Describing Online Aggregation, I suggested that when the Broker sends partial results back to the client, it also sends a flag indicating that the partial aggregation results may be skewed. It may also send estimated error / confidence intervals for the partial aggregation values, if it is able to compute them for the given aggregation function and if the user opts in to receiving such data.

I think this idea shouldn't be confined to partial query results during online aggregation and could equally apply to "final" query results (equivalent to "offline" query results).
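
To make the proposal concrete, here is a minimal, purely hypothetical sketch of what an error-aware result envelope could look like; `ErrorAwareResult` and all of its fields are illustrative names invented for this sketch, not an existing or proposed Druid API:

```java
// Hypothetical sketch only: none of these names exist in Druid.
// Illustrates the kind of metadata an error-aware result row could carry
// alongside the aggregated value itself.
public final class ErrorAwareResult {
  private final Object value;         // the aggregated value
  private final boolean maybeSkewed;  // the partial/final result may be skewed
  private final Double lowerBound;    // estimated lower error bound, if computable
  private final Double upperBound;    // estimated upper error bound, if computable
  private final Double confidence;    // confidence level for the bounds, e.g. 0.95

  public ErrorAwareResult(
      Object value,
      boolean maybeSkewed,
      Double lowerBound,
      Double upperBound,
      Double confidence) {
    this.value = value;
    this.maybeSkewed = maybeSkewed;
    this.lowerBound = lowerBound;
    this.upperBound = upperBound;
    this.confidence = confidence;
  }

  // Getters omitted for brevity.
}
```

The bounds and confidence fields are nullable because, as noted above, not every aggregation function can compute them, and the user may not opt in.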

Some of the sources of inconsistencies / error / variance:

  • Limitations of distributed query execution: see the issue regarding TopN Aliasing, where @drcrallen gives a direct example of variance between topN results from different data nodes. See also a related join query discussion.
  • Time trends for single-valued query types such as topN and groupBy: relative results for different dimension values (grouping keys) may have a time trend that is averaged out by the final aggregation and thus invisible to the user.
  • Significant variance between different partitions within the same time interval might mean that there is simply not enough data to draw reliable conclusions from the final results (see the sketch after this list). In some contexts this is OK: usually when making a topN or count query we are really interested in absolute values, for example, count(log_lines) where error=true. But in other cases, namely when we are interested in proportions, relative values, and trends, we should at least make users aware that the results may include significant error.
  • The inherently probabilistic nature of many Druid aggregators such as quantiles, sketches, HLL, etc., including those that back classic SQL query types under the covers. See, for example, Inconsistencies in the result of the quantile aggregator #6099 and the discussion on replacing the default implementation behind DISTINCT COUNT from one probabilistic structure to another.
  • Something else?
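
Regarding the between-partition variance point, here is a minimal sketch of one way a Broker-side merge could quantify it: compute a normal-approximation confidence interval over per-partition partial aggregates for the same interval. All names here are hypothetical, and real partial aggregates would carry counts and sums rather than plain per-partition means:

```java
// Hypothetical sketch, not Druid code: a normal-approximation confidence
// interval over per-partition partial aggregates for one time interval.
public final class PartitionVarianceSketch {
  /** Returns {lower, upper} of an approximate 95% CI; assumes >= 2 partitions. */
  public static double[] confidenceInterval95(double[] partitionMeans) {
    int n = partitionMeans.length;
    double mean = 0.0;
    for (double m : partitionMeans) {
      mean += m;
    }
    mean /= n;

    double variance = 0.0;
    for (double m : partitionMeans) {
      variance += (m - mean) * (m - mean);
    }
    variance /= (n - 1); // sample variance across partitions

    double stderr = Math.sqrt(variance / n); // standard error of the mean
    double halfWidth = 1.96 * stderr;        // z of about 1.96 gives ~95% confidence
    return new double[] {mean - halfWidth, mean + halfWidth};
  }

  public static void main(String[] args) {
    // An interval that is wide relative to the mean suggests there is too
    // little data to trust proportions or trends in the merged result.
    double[] ci = confidenceInterval95(new double[] {10.2, 48.7, 11.9, 52.3});
    System.out.printf("approx. 95%% CI: [%.1f, %.1f]%n", ci[0], ci[1]);
  }
}
```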

As with Online Aggregation, work is needed on both the backend (Druid itself) and the frontend (UIs that query Druid) to support this and bring value to users.

In terms of antifragility, Druid's current error-oblivious approach to query results may be classified as fragile. An approach that makes errors first-class query results might be classified as resilient, or perhaps even antifragile, because it might help users learn something new about their data during abrupt events.

FYI @gianm @mistercrunch @vogievetsky @julianhyde @leerho @weijietong

@leerho (Contributor) commented Feb 28, 2019

I strongly support the concept that any aggregation that returns approximate results should also return a means for the user to establish the likely bounds on the error, along with the corresponding confidence interval.

Please note that all of the sketches in the DataSketches library provide both a-priori and a-posteriori error estimation methods.
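
For example, here is a minimal a-posteriori bounds example with the Theta sketch (shown with the `com.yahoo.sketches.theta` package names; adjust the imports if your release uses different packaging):

```java
import com.yahoo.sketches.theta.UpdateSketch;

public class ThetaBoundsExample {
  public static void main(String[] args) {
    UpdateSketch sketch = UpdateSketch.builder().build();
    for (long i = 0; i < 1_000_000L; i++) {
      sketch.update(i); // feed unique keys
    }
    // A-posteriori bounds at +/- 2 standard deviations (~95% confidence):
    System.out.println("estimate: " + sketch.getEstimate());
    System.out.println("lower:    " + sketch.getLowerBound(2));
    System.out.println("upper:    " + sketch.getUpperBound(2));
  }
}
```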

Also, please do not confuse the built-in Druid Approximate Histogram with the DataSketches Quantiles sketch, which can also produce an approximate histogram. The built-in Druid Approximate Histogram is very data-sensitive and cannot provide any error guarantees. It also does not qualify as a "sketch", largely because of these issues; it is a purely empirical algorithm. Please see this comparative study.
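
For contrast, the Quantiles sketch's rank error is a mathematically guaranteed, data-independent function of its k parameter. A minimal example (shown with the `org.apache.datasketches.quantiles` package names; older releases used `com.yahoo.sketches.quantiles`, and the exact accessor names may vary between versions):

```java
import org.apache.datasketches.quantiles.DoublesSketch;
import org.apache.datasketches.quantiles.UpdateDoublesSketch;

public class QuantilesErrorExample {
  public static void main(String[] args) {
    UpdateDoublesSketch sketch = DoublesSketch.builder().setK(128).build();
    for (int i = 0; i < 1_000_000; i++) {
      sketch.update(Math.random()); // any stream of doubles
    }
    // Guaranteed, data-independent normalized rank error for this k:
    System.out.println("median estimate: " + sketch.getQuantile(0.5));
    System.out.println("rank error: +/- " + sketch.getNormalizedRankError(false));
    // An approximate histogram with the same guarantee, via the PMF:
    double[] pmf = sketch.getPMF(new double[] {0.25, 0.5, 0.75});
    System.out.println("mass below 0.25: " + pmf[0]);
  }
}
```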

@leerho (Contributor) commented Mar 1, 2019

The discussion in #6099 confuses these two algorithms by associating the size and error table of the DataSketches Quantiles DoublesSketch with Druid's built-in Approximate Histogram.
