
Create built in alert type for index threshold #53041

Closed
mikecote opened this issue Dec 13, 2019 · 11 comments
Labels
Feature:Alerting Team:ResponseOps Label for the ResponseOps team (formerly the Cases and Alerting teams)

Comments

@mikecote
Contributor

mikecote commented Dec 13, 2019

TODOs from initial PR #48959.

  • Remove all calls to watcher apis (x-pack/legacy/plugins/triggers_actions_ui/np_ready/public/application/components/builtin_alert_types/threshold/lib/api.ts)
  • Split alert_add component into multiple files (x-pack/legacy/plugins/triggers_actions_ui/np_ready/public/application/sections/alert_add/alert_add.tsx)
  • Split expression into multiple files (x-pack/legacy/plugins/triggers_actions_ui/np_ready/public/application/sections/alert_add/alert_types/threshold/expression.tsx)
@mikecote mikecote added Feature:Alerting Team:ResponseOps Label for the ResponseOps team (formerly the Cases and Alerting teams) labels Dec 13, 2019
@mikecote
Contributor Author

Pinging @elastic/kibana-alerting-services (Team:Alerting Services)

@pmuellr
Member

pmuellr commented Jan 23, 2020

I'm looking into replacing the watcher APIs, specifically checking whether there are already other APIs we can use rather than copying the watcher ones. The watcher server APIs we use are:

  • /api/watcher/indices
  • /api/watcher/fields
  • /api/watcher/watch/visualize

For the second, /api/watcher/fields, there is an API in the data plugin that lists the fields available in a "pattern" of indices (and other plugins like lens have similar APIs). I haven't found an equivalent of /api/watcher/indices, and I suspect /api/watcher/watch/visualize may be so specialized that there isn't an equivalent.

So it seems like copying the watcher ones, maybe trimming them down if they have more functionality than we need, will be the cleanest way.

Here's an example of the data plugin endpoint though, for fields:

$ curl -v -k https://elastic:changeme@localhost:5601/api/index_patterns/_fields_for_wildcard?pattern=.kibana* | json

{
  "fields": [
    {
      "name": "@timestamp",
      "type": "date",
      "esTypes": [
        "date"
      ],
      "searchable": true,
      "aggregatable": true,
      "readFromDocValues": true
    },
...

Code is here: https://github.com/elastic/kibana/blob/master/src/plugins/data/server/index_patterns/routes.ts
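
For reference, a minimal sketch of what calling that endpoint from the UI code might look like; the helper name loadFieldsForPattern, the FieldForWildcard shape, and the 'kibana/public' import path are assumptions, while the endpoint path and query param are taken from the curl example above:

// Hypothetical helper: fetch fields via the data plugin's HTTP endpoint
// instead of /api/watcher/fields.
import { HttpSetup } from 'kibana/public';

interface FieldForWildcard {
  name: string;
  type: string;
  esTypes?: string[];
  searchable: boolean;
  aggregatable: boolean;
  readFromDocValues?: boolean;
}

export async function loadFieldsForPattern(
  http: HttpSetup,
  pattern: string
): Promise<FieldForWildcard[]> {
  // GET /api/index_patterns/_fields_for_wildcard?pattern=<pattern>
  const { fields } = await http.get('/api/index_patterns/_fields_for_wildcard', {
    query: { pattern },
  });
  return fields;
}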

@pmuellr
Member

pmuellr commented Jan 23, 2020

I was curious what the existing watcher APIs we use actually do, so here are some results:

$ curl -v -k https://elastic:changeme@localhost:5601/api/watcher/indices \
    -H "kbn-xsrf: foo" \
    -d '{"pattern": "*"}' | json
{
  "indices": [
    ".kibana",
    ".kibana_1",
    ...
  ]
}

$ curl -v -k https://elastic:changeme@localhost:5601/api/watcher/fields \
    -H "content-type: application/json" -H "kbn-xsrf: foo" \
    -d '{"indexes": [".kibana"]}' | json
{
  "fields": [
    {
      "name": "action.actionTypeId",
      "type": "keyword",
      "normalizedType": "keyword",
      "aggregatable": true,
      "searchable": true
    },
    ...
  ]
}

I wasn't sure what the inputs for the /api/watcher/watch/visualize route were, so I couldn't test it live, but here's some sample output from a jest test. My guess is that the result comes from an ES query provided via the request body: the embedded array is always two-dimensional, with the time as the first element and the result of the aggregation as the second, and the name of the aggregation is the top-level property name.

const WATCH_VISUALIZE_DATA = {
  count: [
    [1559404800000, 14],
    [1559448000000, 196],
    [1559491200000, 44],
  ],
};
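
To make that shape explicit, here's a small TypeScript type inferred from the sample above; this is just my reading of the data, not a type exported by the watcher plugin:

// Inferred shape: aggregation name -> array of [epoch millis, aggregation value] pairs
type WatchVisualizeData = Record<string, Array<[number, number]>>;

const exampleVisualizeData: WatchVisualizeData = {
  count: [
    [1559404800000, 14],
    [1559448000000, 196],
    [1559491200000, 44],
  ],
};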

@pmuellr
Member

pmuellr commented Jan 27, 2020

It looks like the data plugin can probably provide two pieces of what we need:

  • list of indices, via kibana index patterns
  • list of fields in an index pattern

Here's the plugin start interface:

export interface DataPublicPluginStart {
  autocomplete: AutocompleteStart;
  indexPatterns: IndexPatternsContract;
  search: ISearchStart;
  fieldFormats: FieldFormatsStart;
  query: QueryStart;
  ui: {
    IndexPatternSelect: React.ComponentType<IndexPatternSelectProps>;
    SearchBar: React.ComponentType<StatefulSearchBarProps>;
  };
}

There is a query service as well, but it looks like it only deals with saved searches, and I don't think we want to make a customer create a saved search just to use it in an alert.

Assuming we can use the data plugin to replace the watcher API usage for indices/fields, it's still going to be limited to the Kibana index patterns the user has created. Currently the watcher APIs return all of the available indices, so this is more limiting than the watcher APIs; however, it should also be familiar to existing Kibana users, who are likely already using Kibana index patterns. Seems like they're hard to avoid :-)

If all that's right, then it's a matter of replacing the calls to the watcher API in the UI plugin with calls to the data plugin when getting the lists of indices and fields. I'm guessing we'll still want a new HTTP endpoint to run the query, though: the alert type needs that query logic to make the ES calls anyway, so we can expose the bit that makes the ES call as an endpoint, and the UI can call it to get the data to display in the visualization.
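
As a rough illustration of the indices/fields side of that, something like the following could replace the watcher calls; the getTitles/get method names on the indexPatterns contract are assumptions and may not match the actual IndexPatternsContract surface:

// Hypothetical sketch: list Kibana index patterns and their aggregatable fields
// via the data plugin, instead of the watcher indices/fields endpoints.
import { DataPublicPluginStart } from 'src/plugins/data/public';

export async function loadIndexPatternTitles(data: DataPublicPluginStart): Promise<string[]> {
  // titles of all saved Kibana index patterns, e.g. ['.kibana*', 'logs-*']
  return await data.indexPatterns.getTitles();
}

export async function loadAggregatableFieldNames(
  data: DataPublicPluginStart,
  indexPatternId: string
): Promise<string[]> {
  // load one index pattern by saved-object id, keep only aggregatable fields
  const indexPattern = await data.indexPatterns.get(indexPatternId);
  return indexPattern.fields.filter((field) => field.aggregatable).map((field) => field.name);
}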

@pmuellr
Member

pmuellr commented Jan 27, 2020

Looked a bit deeper; it looks like the search service returned from the data plugin setup can run arbitrary ES queries, so it seems like we won't need any server-side endpoints!

@pmuellr
Member

pmuellr commented Jan 29, 2020

so seems like we won't need any server-side endpoints

Re-thinking this. I think I'd like to have the chart data be sourced from the alert-type itself, rather than getting the chart data from an independent browser-based query. If for some reason we change the query in the alert-type, we'd have to make a corresponding change to the browser-based one, and ... we'll forget, for sure :-)

I'm going to see if this can be designed so the two queries - what the alert-type runs during its interval executions, and what is run to generate chart data - share as much of the same query structure as possible.

@mikecote
Contributor Author

Re-thinking this. I think I'd like to have the chart data be sourced from the alert-type itself, rather than getting the chart data from an independent browser-based query. If for some reason we change the query in the alert-type, we'd have to make a corresponding change to the browser-based one, and ... we'll forget, for sure :-)

I'm going to see if this can be designed so the two queries - what the alert-type runs during its interval executions, and what is run to generate chart data - share as much of the same query structure as possible.

This would be awesome and easier for the developer!

@pmuellr
Member

pmuellr commented Jan 29, 2020

Well I'm the developer, and I do always like making things easier for myself.

I think you're suggesting that we try to set a good example for other alert-type implementors, since they will presumably have the same issue.

I've been wondering if we can build this into the alerting framework itself - we can start by allowing an alert-type to provide an additional function to generate visualization data, and then have a new HTTP endpoint in alerting to request that data, which would call that function. What the inputs and outputs of that HTTP endpoint are is where it gets tricky, as they're likely to be very alert-type-specific.
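
To make the idea concrete, here's a purely hypothetical sketch; neither the getVisualizationData property nor the VisualizationData shape exist in the alerting framework today, they're just placeholders for the proposal:

// Hypothetical extension of an alert type registration: the framework could
// expose a generic HTTP endpoint that calls this function, so chart data is
// produced by the same code (and query) the alert type itself uses.
interface VisualizationData {
  // aggregation name -> [epoch millis, value] pairs, same shape the chart consumes
  series: Record<string, Array<[number, number]>>;
}

interface AlertTypeWithVisualization<Params> {
  id: string;
  name: string;
  executor: (options: { params: Params }) => Promise<void>;
  // optional and alert-type-specific; the inputs/outputs are the tricky part noted above
  getVisualizationData?: (
    params: Params,
    dateStart: string,
    dateEnd: string
  ) => Promise<VisualizationData>;
}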

@gmmorris
Contributor

I like that too.
Do you envision this chart query servicing the chart in Alert Details too, then?
Same query and we just overlay Alert Instances over it too?

@pmuellr
Member

pmuellr commented Feb 1, 2020

Ya, unless for some reason it becomes clear that it doesn't make sense, I think having the same chart from the create flow available in the details view should be easy and useful. I suspect we may want to show some chart data AFTER the date of the triggered event, so a customer can see whether the event got worse or resolved.

@pmuellr
Member

pmuellr commented Feb 28, 2020

With the built-in index threshold alertType PR about to be merged, I'm going to start in on the following:

  • replace the existing watcher API usage with data plugin calls and a call to the new HTTP endpoint in the PR above to get the graph data
  • fix references to alert ids, param names, etc. that may differ between what the code currently uses and what the actual alert requires

The expectation is that this gets us to the point where you can create/edit index threshold alerts without needing a gold+ license (the watcher APIs don't work in basic).

pmuellr added a commit that referenced this issue Feb 28, 2020
)

Adds the first built-in alertType for Kibana alerting, an index threshold alert, and associated HTTP endpoint to generate preview data for it.

addresses the server-side requirements for issue  #53041
pmuellr added a commit to pmuellr/kibana that referenced this issue Feb 28, 2020
…stic#57030)

Adds the first built-in alertType for Kibana alerting, an index threshold alert, and associated HTTP endpoint to generate preview data for it.

addresses the server-side requirements for issue  elastic#53041
pmuellr added a commit that referenced this issue Feb 28, 2020
) (#58901)

Adds the first built-in alertType for Kibana alerting, an index threshold alert, and associated HTTP endpoint to generate preview data for it.

addresses the server-side requirements for issue  #53041
pmuellr added a commit that referenced this issue Mar 6, 2020
…ew API (#59385)

Changes the alerting UI to use the new time series query HTTP endpoint provided by the builtin index threshold alertType; previously it used a watcher HTTP endpoint.

This is part of the ongoing index threshold work tracked in #53041
pmuellr added a commit to pmuellr/kibana that referenced this issue Mar 6, 2020
…ew API (elastic#59385)

Changes the alerting UI to use the new time series query HTTP endpoint provided by the builtin index threshold alertType; previously it used a watcher HTTP endpoint.

This is part of the ongoing index threshold work tracked in elastic#53041
pmuellr added a commit that referenced this issue Mar 6, 2020
…ew API (#59385) (#59557)

Changes the alerting UI to use the new time series query HTTP endpoint provided by the builtin index threshold alertType; previously it used a watcher HTTP endpoint.

This is part of the ongoing index threshold work tracked in #53041
@pmuellr pmuellr closed this as completed in 3f365a8 Mar 9, 2020
pmuellr added a commit to pmuellr/kibana that referenced this issue Mar 9, 2020
…elastic#59475)

Prior to this PR, the alerting UI used two HTTP endpoints provided by the
Kibana watcher plugin, to list index and field names.  There are now two HTTP
endpoints in the alerting_builtins plugin which will be used instead.

The code for the new endpoints was largely copied from the existing watcher
endpoints, and the HTTP request/response bodies kept pretty much the same.

resolves elastic#53041
pmuellr added a commit that referenced this issue Mar 10, 2020
…#59475) (#59713)

Prior to this PR, the alerting UI used two HTTP endpoints provided by the
Kibana watcher plugin, to list index and field names.  There are now two HTTP
endpoints in the alerting_builtins plugin which will be used instead.

The code for the new endpoints was largely copied from the existing watcher
endpoints, and the HTTP request/response bodies kept pretty much the same.

resolves #53041
jkelastic pushed a commit to jkelastic/kibana that referenced this issue Mar 12, 2020
…ew API (elastic#59385)

Changes the alerting UI to use the new time series query HTTP endpoint provided by the builtin index threshold alertType; previously it used a watcher HTTP endpoint.

This is part of the ongoing index threshold work tracked in elastic#53041
jkelastic pushed a commit to jkelastic/kibana that referenced this issue Mar 12, 2020
…elastic#59475)

Prior to this PR, the alerting UI used two HTTP endpoints provided by the
Kibana watcher plugin, to list index and field names.  There are now two HTTP
endpoints in the alerting_builtins plugin which will be used instead.

The code for the new endpoints was largely copied from the existing watcher
endpoints, and the HTTP request/response bodies kept pretty much the same.

resolves elastic#53041
@kobelb kobelb added the needs-team Issues missing a team label label Jan 31, 2022
@botelastic botelastic bot removed the needs-team Issues missing a team label label Jan 31, 2022