
[Ingest Manager] Make setupIngestManager wait if setup is in progress #70008

Merged: 4 commits merged into elastic:master on Jun 30, 2020

Conversation

jfsiii (Contributor) commented Jun 25, 2020:

Summary

closes #69843
Draft while waiting for CI to pass and for other tests to be added

Handle the case when setupIngestManager is called again before the first call has had a chance to complete. I chose to await instead of returning early or erroring, since that's the closest to what we're doing now.

We likely want to eventually split the status up into more granular settings like "hasDefaultPackages", "hasDefaultOutput", "hasDefaultConfig", etc., but I think this works.
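Conceptually, the change amounts to something like the sketch below. This is a simplified illustration, not the exact diff: the placeholder types and the `markDone` helper exist only for this sketch, while `setupIngestManager` and `setupIngestStatus` are the real names touched by the PR.

```ts
// Sketch only: placeholder types stand in for the plugin's real ones.
type SavedObjectsClientContract = unknown;
type CallESAsCurrentUser = (...args: unknown[]) => Promise<unknown>;

// Module-level promise that tracks the in-flight (or last) setup run.
let setupIngestStatus: Promise<void> | undefined;

export async function setupIngestManager(
  soClient: SavedObjectsClientContract,
  callCluster: CallESAsCurrentUser
) {
  // If another caller already kicked off setup, wait for it to finish
  // instead of erroring or returning early.
  if (setupIngestStatus) {
    await setupIngestStatus;
  }

  // Record that a setup run is now in progress so later callers can await it.
  let markDone!: () => void;
  setupIngestStatus = new Promise<void>((resolve) => {
    markDone = resolve;
  });

  try {
    // ...existing setup body: ensure default output/config, install default
    // packages, etc. (unchanged, just wrapped in try/finally here)...
  } finally {
    markDone();
  }
}
```

Concurrent callers all end up awaiting the same in-flight promise, which is what serializes the setup work.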

export async function setupIngestManager(
  soClient: SavedObjectsClientContract,
  callCluster: CallESAsCurrentUser
) {
  const [installedPackages, defaultOutput, config] = await Promise.all([
jfsiii (Contributor Author):

This is from moving the body inside a try/catch.

);
if (!packageShouldBeInstalled) {
continue;
try {
neptunian (Contributor), Jun 26, 2020:

Any reason we can't put this whole try/catch block into the else above so we don't need to run all this code again since we now know that it ran/is running? Would speed things up a bit. I tested it out and it seems to work.

jfsiii (Contributor Author):

That would only run the setup one time which isn't the current or previous behavior. I'm trying to avoid any changes to the prior flow / guarantees or only make the smallest incremental changes required.

We'll definitely revisit this and other parts of the setup in later PRs but for now I want small conservative changes.

neptunian (Contributor), Jun 26, 2020:

Okay, but we were only ever running the entire setup twice because we couldn't track the first. Now there is no need. Otherwise this change is going to cause the Ingest Manager setup to take longer because we have to await the first call before running the whole of the second and awaiting that.

jfsiii (Contributor Author):

> Okay, but we were only ever running the entire setup twice because we couldn't track the first. Now there is no need.

Then maybe we can remove the second call? Either way, I'm starting with stability; then we can move on to speed.

neptunian (Contributor), Jun 26, 2020:

We still need the second call, which functions more like a check now, because its purpose is to not let the user interact with Ingest Manager until setup has completed.

jfsiii (Contributor Author):

I think we should talk about this outside this PR. In person or another issue. As far as I can tell, this PR fixes the known bugs and tests are passing. I'd like to take the win and do more in another effort.

They wouldn't remain, and they seem to cause an issue with ensureDefaultConfig or something default-config related.

Here are some logs from a failing run that had some additional logging added:

<details><summary>Failing log</summary>

```
proc [kibana] spawing 3 calls to setupIngestManager
   │ proc [kibana] has setupIngestStatus. is pending
   │ proc [kibana] has setupIngestStatus. is pending
   │ proc [kibana] has setupIngestStatus. is pending
   │ proc [kibana] after await setupIngestStatus
   │ proc [kibana] before promise.all
   │ proc [kibana] after await setupIngestStatus
   │ proc [kibana] before promise.all
   │ proc [kibana] after await setupIngestStatus
   │ proc [kibana] before promise.all
   │ info [o.e.c.m.MetadataMappingService] [JFSIII.local] [.kibana_2/d9f18zusTYONXlP_UrRRCQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [JFSIII.local] [.kibana_2/d9f18zusTYONXlP_UrRRCQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [JFSIII.local] [.kibana_2/d9f18zusTYONXlP_UrRRCQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [JFSIII.local] [.kibana_2/d9f18zusTYONXlP_UrRRCQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [JFSIII.local] [.kibana_2/d9f18zusTYONXlP_UrRRCQ] update_mapping [_doc]
   │ proc [kibana] after promise.all. get config.id config1
   │ proc [kibana] after promise.all. get config.id config1
   │ proc [kibana] after promise.all. get config.id config1
   │ info [o.e.c.m.MetadataMappingService] [JFSIII.local] [.kibana_2/d9f18zusTYONXlP_UrRRCQ] update_mapping [_doc]
   │ proc [kibana] setupIngestManager error { Error: [ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1]: [version_conflict_engine_exception] [ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1], with { index_uuid="d9f18zusTYONXlP_UrRRCQ" & shard="0" & index=".kibana_2" }
   │ proc [kibana]     at respond (/Users/jfsiii/work/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)
   │ proc [kibana]     at checkRespForFailure (/Users/jfsiii/work/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)
   │ proc [kibana]     at HttpConnector.<anonymous> (/Users/jfsiii/work/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
   │ proc [kibana]     at IncomingMessage.wrapper (/Users/jfsiii/work/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)
   │ proc [kibana]     at IncomingMessage.emit (events.js:203:15)
   │ proc [kibana]     at endReadableNT (_stream_readable.js:1145:12)
   │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:63:19)
   │ proc [kibana]   status: 409,
   │ proc [kibana]   displayName: 'Conflict',
   │ proc [kibana]   message:
   │ proc [kibana]    '[ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1]: [version_conflict_engine_exception] [ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1], with { index_uuid="d9f18zusTYONXlP_UrRRCQ" & shard="0" & index=".kibana_2" }',
   │ proc [kibana]   path: '/.kibana/_update/ingest-agent-configs%3Aconfig1',
   │ proc [kibana]   query: { refresh: 'wait_for' },
   │ proc [kibana]   body:
   │ proc [kibana]    { error:
   │ proc [kibana]       { root_cause: [Array],
   │ proc [kibana]         type: 'version_conflict_engine_exception',
   │ proc [kibana]         reason:
   │ proc [kibana]          '[ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1]',
   │ proc [kibana]         index_uuid: 'd9f18zusTYONXlP_UrRRCQ',
   │ proc [kibana]         shard: '0',
   │ proc [kibana]         index: '.kibana_2' },
   │ proc [kibana]      status: 409 },
   │ proc [kibana]   statusCode: 409,
   │ proc [kibana]   response:
   │ proc [kibana]    '{"error":{"root_cause":[{"type":"version_conflict_engine_exception","reason":"[ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1]","index_uuid":"d9f18zusTYONXlP_UrRRCQ","shard":"0","index":".kibana_2"}],"type":"version_conflict_engine_exception","reason":"[ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1]","index_uuid":"d9f18zusTYONXlP_UrRRCQ","shard":"0","index":".kibana_2"},"status":409}',
   │ proc [kibana]   toString: [Function],
   │ proc [kibana]   toJSON: [Function],
   │ proc [kibana]   isBoom: true,
   │ proc [kibana]   isServer: false,
   │ proc [kibana]   data: null,
   │ proc [kibana]   output:
   │ proc [kibana]    { statusCode: 409,
   │ proc [kibana]      payload:
   │ proc [kibana]       { message:
   │ proc [kibana]          '[ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1]: [version_conflict_engine_exception] [ingest-agent-configs:config1]: version conflict, required seqNo [274], primary term [1]. current document has seqNo [275] and primary term [1], with { index_uuid="d9f18zusTYONXlP_UrRRCQ" & shard="0" & index=".kibana_2" }',
   │ proc [kibana]         statusCode: 409,
   │ proc [kibana]         error: 'Conflict' },
   │ proc [kibana]      headers: {} },
   │ proc [kibana]   reformat: [Function],
   │ proc [kibana]   [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/conflict' }
   │ proc [kibana] setupIngestManager ok
   │ proc [kibana] setupIngestManager ok
   │ proc [kibana] has setupIngestStatus. is pending
   │ proc [kibana] after await setupIngestStatus
   │ proc [kibana] before promise.all
   │ proc [kibana] after promise.all. get config.id config1
   │ proc [kibana] setupIngestManager ok
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [JFSIII.local] updated role [fleet_enroll]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [JFSIII.local] updated user [fleet_enroll]
   │ proc [kibana] setupFleet before getDefaultOutputId
   │ proc [kibana] setupFleet after getDefaultOutputId aed7a97e-9212-4ff9-b562-8b45f085f0d6
   │ proc [kibana] setupFleet before updateOutput aed7a97e-9212-4ff9-b562-8b45f085f0d6 { fleet_enroll_username: 'fleet_enroll',
   │ proc [kibana]   fleet_enroll_password: 'OWE3MTM3NzMtZDM4Zi00OGUwLWFkYmMtZDMwNzE0YjMwMGY2' }
   │ proc [kibana] setupFleet before generateEnrollmentAPIKey { name: 'Default', configId: 'config1' }
   │ proc [kibana] server   error  [09:30:33.314]  Error: Internal Server Error
   │ proc [kibana]     at HapiResponseAdapter.toError (/Users/jfsiii/work/kibana/src/core/server/http/router/response_adapter.ts:129:19)
   │ proc [kibana]     at HapiResponseAdapter.toHapiResponse (/Users/jfsiii/work/kibana/src/core/server/http/router/response_adapter.ts:79:19)
   │ proc [kibana]     at HapiResponseAdapter.handle (/Users/jfsiii/work/kibana/src/core/server/http/router/response_adapter.ts:74:17)
   │ proc [kibana]     at Router.handle (/Users/jfsiii/work/kibana/src/core/server/http/router/router.ts:263:34)
   │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:68:7)
```
</details>
jfsiii (Contributor Author) commented Jun 26, 2020:

@elasticmachine merge upstream

@jfsiii marked this pull request as ready for review June 26, 2020 17:56
@jfsiii requested a review from a team June 26, 2020 17:56
@botelastic added the Team:Fleet (Team label for Observability Data Collection Fleet team) label Jun 26, 2020
elasticmachine (Contributor):
Pinging @elastic/ingest-management (Team:Ingest Management)

@nchaulet closed this Jun 26, 2020
@nchaulet reopened this Jun 26, 2020
@@ -30,70 +30,93 @@ import { appContextService } from './app_context';
const FLEET_ENROLL_USERNAME = 'fleet_enroll';
const FLEET_ENROLL_ROLE = 'fleet_enroll';

// the promise which tracks the setup
let setupIngestStatus: Promise<void> | undefined;
nchaulet (Member):

This is not going to work if we have multiple instances of Kibana, no? Can we use a saved object to keep the state of the setup?

jfsiii (Contributor Author):

Yeah, I wondered about the same things. From Slack the other day:

> I initially had some logic for dealing with pending & succeeded state, but the way we tried initially had issues in multi-kibana setups (and tests, iirc). We might need to move the state out of JS and into ES or somewhere else that's durable & shared by multiple Kibanas.

I did this because it's the simplest change that could possibly work, but I do think we'll want to move it to Saved Objects eventually.

Along those lines, and per https://github.com/elastic/kibana/issues/63123, perhaps we want a GET route to match /fleet/setup?

Do you think these need to be done in this PR or can they be discussed/implemented in a followup issue?
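For illustration only, moving the status into a saved object might look roughly like the sketch below. The `ingest-setup-status` type, its fields, and the `claimSetup` helper are hypothetical (nothing like this is in the PR), and the import path is approximate.

```ts
import { SavedObjectsClientContract, SavedObjectsErrorHelpers } from 'src/core/server';

// Hypothetical saved object type/id for tracking setup state; not in this PR.
const SETUP_STATUS_TYPE = 'ingest-setup-status';
const SETUP_STATUS_ID = 'default';

// Try to "claim" setup by creating the status document. Returns true if this
// Kibana instance should run setup, false if another instance already claimed it.
async function claimSetup(soClient: SavedObjectsClientContract): Promise<boolean> {
  try {
    await soClient.create(
      SETUP_STATUS_TYPE,
      { state: 'pending', startedAt: new Date().toISOString() },
      { id: SETUP_STATUS_ID }
    );
    return true;
  } catch (error) {
    // A conflict means the document already exists, i.e. setup is pending or
    // done elsewhere; anything else is a real failure.
    if (SavedObjectsErrorHelpers.isConflictError(error)) {
      return false;
    }
    throw error;
  }
}
```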

nchaulet (Member):

I am in favor of doing it in this PR, as I think this is probably not going to solve our problem.

jfsiii (Contributor Author):

Does the failure in multi-kibana exist in the code before #69505, e.g. in 7.8?

I'll see how quickly I can do a dead-simple boolean but, like I said in the description, I think we'll want more fine-grained permissions and I'd like to avoid designing that during the PR or writing such short-lived code if this works. I know, "if" it works.

@jfsiii added the release_note:skip (Skip the PR/issue when compiling release notes), v7.9.0, and v8.0.0 labels Jun 26, 2020
skh (Contributor) commented Jun 30, 2020:

Given that this is a non-trivial problem, would it make sense to open follow-up issues for

  • the multi-kibana case (I would assume we run into more issues there anyway as we haven't tested it at all, or have we?)
  • optimizing for speed again

and merge this one so we can move on? @neptunian @nchaulet, what do you think?

neptunian (Contributor) commented Jun 30, 2020:

My issue is that the code no longer makes sense to me, and it increases the setup time. Here, the second call now awaits the first call (which adds time) so that setup can complete, and then runs through the whole setup again even though we know the first call has already completed successfully. I think it would make sense to clear this up in this PR.

Since it does fix the race condition issue, I suppose we can have followup PRs after @nchaulet approves his request.

nchaulet (Member) left a review comment:

OK with this as a temporary solution that fixes local development, but we should have a follow-up issue.

ph (Contributor) commented Jun 30, 2020:

Let's try to have a follow-up soon for the multi-Kibana issues @nchaulet mentioned; this would be a problem for 7.9 and for the scaling concerns.

jfsiii (Contributor Author) commented Jun 30, 2020:

@neptunian

> My issue is the code no longer makes sense to me

That's good feedback. I'll try to clear it up

> and increases the setup time. Here, the second call is now awaiting the first call (increases time) to wait for setup to complete

This seems like a fundamental tradeoff we're making (trading some speed for correctness). If we call the same endpoint/function multiple times, we can either execute them (a) in parallel/concurrently or (b) in series/consecutively. master and 7.8 do (a), and that's a key source of the double-config bug. Also, it's not clear that "longer" means "substantially longer": many of the functions exit early if they've already run, so it's not 2 * first_setup_time.

> then running through the setup again knowing that the first has already completed successfully

That's the status quo behavior: multiple HTTP calls result in multiple calls to that function. I talked about this some in #70008 (comment), but we're making a change at the lowest level of the plugin. So much code was written against the existing behavior that I don't think it's wise to make large changes to it. This way we can see if this works as expected and then move forward with further refactoring.

jfsiii (Contributor Author) commented Jun 30, 2020:

created #70333 for follow-up

jfsiii (Contributor Author) commented Jun 30, 2020:

@elasticmachine merge upstream

kibanamachine (Contributor):
💛 Build succeeded, but was flaky


Test Failures

Kibana Pipeline / kibana-oss-agent / Firefox UI Functional Tests.test/functional/apps/visualize/input_control_vis/chained_controls·js.visualize app input controls chained controls should create a seperate filter pill for parent control and child control

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 2 times on tracked branches: https://github.com/elastic/kibana/issues/70295

[00:00:00]       │
[00:11:22]         └-: visualize app
[00:11:22]           └-> "before all" hook
[00:11:22]           └-> "before all" hook
[00:11:22]             │ debg Starting visualize before method
[00:11:22]             │ info [logstash_functional] Loading "mappings.json"
[00:11:22]             │ info [logstash_functional] Loading "data.json.gz"
[00:11:22]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [logstash-2015.09.22] creating index, cause [api], templates [], shards [1]/[0], mappings [_doc]
[00:11:22]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.22][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.22][0]]"
[00:11:22]             │ info [logstash_functional] Created index "logstash-2015.09.22"
[00:11:22]             │ debg [logstash_functional] "logstash-2015.09.22" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:11:22]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [logstash-2015.09.20] creating index, cause [api], templates [], shards [1]/[0], mappings [_doc]
[00:11:22]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.20][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.20][0]]"
[00:11:22]             │ info [logstash_functional] Created index "logstash-2015.09.20"
[00:11:22]             │ debg [logstash_functional] "logstash-2015.09.20" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:11:22]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [logstash-2015.09.21] creating index, cause [api], templates [], shards [1]/[0], mappings [_doc]
[00:11:22]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.21][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.21][0]]"
[00:11:22]             │ info [logstash_functional] Created index "logstash-2015.09.21"
[00:11:22]             │ debg [logstash_functional] "logstash-2015.09.21" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:11:32]             │ info progress: 13846
[00:11:32]             │ info [logstash_functional] Indexed 4633 docs into "logstash-2015.09.22"
[00:11:32]             │ info [logstash_functional] Indexed 4757 docs into "logstash-2015.09.20"
[00:11:32]             │ info [logstash_functional] Indexed 4614 docs into "logstash-2015.09.21"
[00:11:32]             │ info [long_window_logstash] Loading "mappings.json"
[00:11:32]             │ info [long_window_logstash] Loading "data.json.gz"
[00:11:32]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [long-window-logstash-0] creating index, cause [api], templates [], shards [1]/[0], mappings [_doc]
[00:11:32]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[long-window-logstash-0][0]]])." previous.health="YELLOW" reason="shards started [[long-window-logstash-0][0]]"
[00:11:32]             │ info [long_window_logstash] Created index "long-window-logstash-0"
[00:11:32]             │ debg [long_window_logstash] "long-window-logstash-0" settings {"index":{"analysis":{"analyzer":{"makelogs_url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:11:42]             │ info progress: 11740
[00:11:44]             │ info [long_window_logstash] Indexed 14005 docs into "long-window-logstash-0"
[00:11:44]             │ info [visualize] Loading "mappings.json"
[00:11:44]             │ info [visualize] Loading "data.json"
[00:11:44]             │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_1/PAnU8E5tT1C6vOsHx0bL4A] deleting index
[00:11:44]             │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_2/5VqHUejsQyeKoxdyoQazMQ] deleting index
[00:11:44]             │ info [visualize] Deleted existing index [".kibana_2",".kibana_1"]
[00:11:44]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]
[00:11:44]             │ info [visualize] Created index ".kibana"
[00:11:44]             │ debg [visualize] ".kibana" settings {"index":{"number_of_replicas":"1","number_of_shards":"1"}}
[00:11:44]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana/bpXhxgkZSw-3OB8KXONhkA] update_mapping [_doc]
[00:11:44]             │ info [visualize] Indexed 12 docs into ".kibana"
[00:11:44]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana/bpXhxgkZSw-3OB8KXONhkA] update_mapping [_doc]
[00:11:44]             │ debg Migrating saved objects
[00:11:44]             │ proc [kibana]   log   [15:41:21.569] [info][savedobjects-service] Creating index .kibana_2.
[00:11:44]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_2] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]
[00:11:44]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] updating number_of_replicas to [0] for indices [.kibana_2]
[00:11:44]             │ proc [kibana]   log   [15:41:21.606] [info][savedobjects-service] Reindexing .kibana to .kibana_1
[00:11:44]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]
[00:11:44]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] updating number_of_replicas to [0] for indices [.kibana_1]
[00:11:45]             │ info [o.e.t.LoggingTaskListener] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] 8967 finished with response BulkByScrollResponse[took=27.3ms,timed_out=false,sliceId=null,updated=0,created=12,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[00:11:45]             │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana/bpXhxgkZSw-3OB8KXONhkA] deleting index
[00:11:45]             │ proc [kibana]   log   [15:41:21.920] [info][savedobjects-service] Migrating .kibana_1 saved objects to .kibana_2
[00:11:45]             │ proc [kibana]   log   [15:41:21.932] [error][savedobjects-service] Error: Unable to migrate the corrupt Saved Object document index-pattern:test_index*. To prevent Kibana from performing a migration on every restart, please delete or fix this document by ensuring that the namespace and type in the document's id matches the values in the namespace and type fields.
[00:11:45]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_2/oPqCfCT7TrydGjHn1qb24g] update_mapping [_doc]
[00:11:45]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_2/oPqCfCT7TrydGjHn1qb24g] update_mapping [_doc]
[00:11:45]             │ proc [kibana]   log   [15:41:21.992] [info][savedobjects-service] Pointing alias .kibana to .kibana_2.
[00:11:45]             │ proc [kibana]   log   [15:41:22.029] [info][savedobjects-service] Finished in 461ms.
[00:11:45]             │ debg applying update to kibana config: {"accessibility:disableAnimations":true,"dateFormat:tz":"UTC"}
[00:11:45]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_2/oPqCfCT7TrydGjHn1qb24g] update_mapping [_doc]
[00:11:45]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_2/oPqCfCT7TrydGjHn1qb24g] update_mapping [_doc]
[00:11:46]             │ debg replacing kibana config doc: {"defaultIndex":"logstash-*","format:bytes:defaultPattern":"0,0.[000]b"}
[00:11:46]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xl-1593530169180376662] [.kibana_2/oPqCfCT7TrydGjHn1qb24g] update_mapping [_doc]
[00:11:48]           └-: 
[00:11:48]             └-> "before all" hook
[00:11:48]             └-: input controls
[00:11:48]               └-> "before all" hook
[00:11:48]               └-: chained controls
[00:11:48]                 └-> "before all" hook
[00:11:48]                 └-> "before all" hook
[00:11:48]                   │ debg navigating to visualize url: http://localhost:6111/app/visualize#/
[00:11:48]                   │ debg navigate to: http://localhost:6111/app/visualize#/
[00:11:48]                   │ debg browser[log] "^ A single error about an inline script not firing due to content security policy is expected!"
[00:11:48]                   │ debg ... sleep(700) start
[00:11:48]                   │ debg ... sleep(700) end
[00:11:48]                   │ debg returned from get, calling refresh
[00:11:49]                   │ debg browser[info] "INFO: 2020-06-30T15:41:26Z
[00:11:49]                   │        Adding connection to http://localhost:6111/elasticsearch
[00:11:49]                   │
[00:11:49]                   │      "
[00:11:49]                   │ERROR browser[error] (new TypeError("NetworkError when attempting to fetch resource.", ""))
[00:11:49]                   │ debg browser[log] "^ A single error about an inline script not firing due to content security policy is expected!"
[00:11:50]                   │ debg currentUrl = http://localhost:6111/app/visualize#/
[00:11:50]                   │          appUrl = http://localhost:6111/app/visualize#/
[00:11:50]                   │ debg TestSubjects.find(kibanaChrome)
[00:11:50]                   │ debg Find.findByCssSelector('[data-test-subj="kibanaChrome"]') with timeout=60000
[00:11:51]                   │ debg browser[info] "INFO: 2020-06-30T15:41:27Z
[00:11:51]                   │        Adding connection to http://localhost:6111/elasticsearch
[00:11:51]                   │
[00:11:51]                   │      "
[00:11:51]                   │ debg ... sleep(501) start
[00:11:52]                   │ debg ... sleep(501) end
[00:11:52]                   │ debg in navigateTo url = http://localhost:6111/app/visualize#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))
[00:11:52]                   │ debg --- retry.try error: URL changed, waiting for it to settle
[00:11:52]                   │ debg ... sleep(501) start
[00:11:53]                   │ debg ... sleep(501) end
[00:11:53]                   │ debg in navigateTo url = http://localhost:6111/app/visualize#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))
[00:11:53]                   │ debg TestSubjects.exists(statusPageContainer)
[00:11:53]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="statusPageContainer"]') with timeout=2500
[00:11:55]                   │ debg --- retry.tryForTime error: [data-test-subj="statusPageContainer"] is not displayed
[00:11:56]                   │ debg TestSubjects.click(visListingTitleLink-chained-input-control)
[00:11:56]                   │ debg Find.clickByCssSelector('[data-test-subj="visListingTitleLink-chained-input-control"]') with timeout=10000
[00:11:56]                   │ debg Find.findByCssSelector('[data-test-subj="visListingTitleLink-chained-input-control"]') with timeout=10000
[00:11:56]                   │ debg isGlobalLoadingIndicatorVisible
[00:11:56]                   │ debg TestSubjects.exists(globalLoadingIndicator)
[00:11:56]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="globalLoadingIndicator"]') with timeout=1500
[00:11:56]                   │ debg TestSubjects.exists(globalLoadingIndicator-hidden)
[00:11:56]                   │ debg Find.existsByCssSelector('[data-test-subj="globalLoadingIndicator-hidden"]') with timeout=100000
[00:11:56]                 └-> should disable child control when parent control is not set
[00:11:56]                   └-> "before each" hook: global before each
[00:11:56]                   │ debg comboBox.getOptionsList, comboBoxSelector: listControlSelect0
[00:11:56]                   │ debg TestSubjects.find(listControlSelect0)
[00:11:56]                   │ debg Find.findByCssSelector('[data-test-subj="listControlSelect0"]') with timeout=10000
[00:11:57]                   │ debg TestSubjects.click(listControlSelect0)
[00:11:57]                   │ debg Find.clickByCssSelector('[data-test-subj="listControlSelect0"]') with timeout=10000
[00:11:57]                   │ debg Find.findByCssSelector('[data-test-subj="listControlSelect0"]') with timeout=10000
[00:11:57]                   │ debg browser[log] "Detected an unhandled Promise rejection.
[00:11:57]                   │      TypeError: this.uiSettings is undefined"
[00:11:58]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:11:58]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:11:58]                   │ debg TestSubjects.find(~comboBoxOptionsList)
[00:11:58]                   │ debg Find.findByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=10000
[00:11:58]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:11:58]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:11:59]                   │ debg Find.findByCssSelector('[data-test-subj="inputControl1"] input') with timeout=10000
[00:11:59]                   └- ✓ pass  (2.3s) "visualize app  input controls chained controls should disable child control when parent control is not set"
[00:11:59]                 └-> should filter child control options by parent control value
[00:11:59]                   └-> "before each" hook: global before each
[00:11:59]                   │ debg comboBox.set, comboBoxSelector: listControlSelect0
[00:11:59]                   │ debg TestSubjects.find(listControlSelect0)
[00:11:59]                   │ debg Find.findByCssSelector('[data-test-subj="listControlSelect0"]') with timeout=10000
[00:11:59]                   │ debg comboBox.setElement, value: BR
[00:11:59]                   │ debg comboBox.isOptionSelected, value: BR
[00:12:01]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:12:01]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:12:01]                   │ debg Find.allByCssSelector('.euiFilterSelectItem[title^="BR"]') with timeout=2500
[00:12:01]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:12:01]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:12:01]                   │ debg comboBox.getOptionsList, comboBoxSelector: listControlSelect1
[00:12:01]                   │ debg TestSubjects.find(listControlSelect1)
[00:12:01]                   │ debg Find.findByCssSelector('[data-test-subj="listControlSelect1"]') with timeout=10000
[00:12:01]                   │ debg TestSubjects.click(listControlSelect1)
[00:12:01]                   │ debg Find.clickByCssSelector('[data-test-subj="listControlSelect1"]') with timeout=10000
[00:12:01]                   │ debg Find.findByCssSelector('[data-test-subj="listControlSelect1"]') with timeout=10000
[00:12:03]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:12:03]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:12:03]                   │ debg TestSubjects.find(~comboBoxOptionsList)
[00:12:03]                   │ debg Find.findByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=10000
[00:12:03]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:12:03]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:12:03]                   └- ✓ pass  (4.2s) "visualize app  input controls chained controls should filter child control options by parent control value"
[00:12:03]                 └-> should create a seperate filter pill for parent control and child control
[00:12:03]                   └-> "before each" hook: global before each
[00:12:03]                   │ debg comboBox.set, comboBoxSelector: listControlSelect1
[00:12:03]                   │ debg TestSubjects.find(listControlSelect1)
[00:12:03]                   │ debg Find.findByCssSelector('[data-test-subj="listControlSelect1"]') with timeout=10000
[00:12:03]                   │ debg comboBox.setElement, value: 14.61.182.136
[00:12:03]                   │ debg comboBox.isOptionSelected, value: 14.61.182.136
[00:12:05]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:12:05]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:12:05]                   │ debg Find.allByCssSelector('.euiFilterSelectItem[title^="14.61.182.136"]') with timeout=2500
[00:12:05]                   │ debg TestSubjects.exists(~comboBoxOptionsList)
[00:12:05]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="comboBoxOptionsList"]') with timeout=2500
[00:12:06]                   │ debg TestSubjects.clickWhenNotDisabled(inputControlSubmitBtn)
[00:12:06]                   │ debg Find.clickByCssSelectorWhenNotDisabled('[data-test-subj="inputControlSubmitBtn"]') with timeout=10000
[00:12:06]                   │ debg Find.findByCssSelector('[data-test-subj="inputControlSubmitBtn"]') with timeout=10000
[00:12:06]                   │ debg Waiting up to 20000ms for rendering count to stabilize...
[00:12:06]                   │ debg TestSubjects.find(visualizationLoader)
[00:12:06]                   │ debg Find.findByCssSelector('[data-test-subj="visualizationLoader"]') with timeout=10000
[00:12:06]                   │ debg -- firstCount=1
[00:12:06]                   │ debg ... sleep(1000) start
[00:12:07]                   │ debg ... sleep(1000) end
[00:12:07]                   │ debg TestSubjects.find(visualizationLoader)
[00:12:07]                   │ debg Find.findByCssSelector('[data-test-subj="visualizationLoader"]') with timeout=10000
[00:12:07]                   │ debg -- secondCount=2
[00:12:08]                   │ debg TestSubjects.find(visualizationLoader)
[00:12:08]                   │ debg Find.findByCssSelector('[data-test-subj="visualizationLoader"]') with timeout=10000
[00:12:08]                   │ debg -- firstCount=2
[00:12:08]                   │ debg ... sleep(1000) start
[00:12:09]                   │ debg ... sleep(1000) end
[00:12:09]                   │ debg TestSubjects.find(visualizationLoader)
[00:12:09]                   │ debg Find.findByCssSelector('[data-test-subj="visualizationLoader"]') with timeout=10000
[00:12:09]                   │ debg -- secondCount=2
[00:12:09]                   │ debg TestSubjects.exists(filter filter-enabled filter-key-geo.src filter-value-BR filter-unpinned)
[00:12:09]                   │ debg Find.existsByCssSelector('[data-test-subj="filter filter-enabled filter-key-geo.src filter-value-BR filter-unpinned"]') with timeout=2500
[00:12:11]                   │ info Taking screenshot "/dev/shm/workspace/kibana/test/functional/screenshots/failure/visualize app  input controls chained controls should create a seperate filter pill for parent control and child control.png"
[00:12:11]                   │ info Current URL is: http://localhost:6111/app/visualize#/edit/68305470-87bc-11e9-a991-3b492a7c3e09?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))&_a=(filters:!((%27$state%27:(store:appState),meta:(alias:!n,controlledBy:%271559757816862%27,disabled:!f,index:%27logstash-*%27,key:geo.src,negate:!f,params:(query:BR),type:phrase),query:(match_phrase:(geo.src:BR))),(%27$state%27:(store:appState),meta:(alias:!n,controlledBy:%271559757836347%27,disabled:!f,index:%27logstash-*%27,key:clientip,negate:!f,params:(query:%2714.61.182.136%27),type:phrase),query:(match_phrase:(clientip:%2714.61.182.136%27)))),linked:!f,query:(language:kuery,query:%27%27),uiState:(),vis:(aggs:!(),params:(controls:!((fieldName:geo.src,id:%271559757816862%27,indexPattern:%27logstash-*%27,label:%27%27,options:(dynamicOptions:!t,multiselect:!t,order:desc,size:5,type:terms),parent:%27%27,type:list),(fieldName:clientip,id:%271559757836347%27,indexPattern:%27logstash-*%27,label:%27%27,options:(dynamicOptions:!t,multiselect:!t,order:desc,size:5,type:terms),parent:%271559757816862%27,type:list)),pinFilters:!f,updateFiltersOnChange:!f,useTimeFilter:!f),title:%27chained%20input%20control%27,type:input_control_vis))
[00:12:11]                   │ info Saving page source to: /dev/shm/workspace/kibana/test/functional/failure_debug/html/visualize app  input controls chained controls should create a seperate filter pill for parent control and child control.html
[00:12:11]                   └- ✖ fail: "visualize app  input controls chained controls should create a seperate filter pill for parent control and child control"
[00:12:11]                   │

Stack Trace

Error: expected false to equal true
    at Assertion.assert (packages/kbn-expect/expect.js:100:11)
    at Assertion.equal (packages/kbn-expect/expect.js:227:8)
    at Context.it (test/functional/apps/visualize/input_control_vis/chained_controls.js:63:41)
    at process._tickCallback (internal/process/next_tick.js:68:7)

Build metrics

✅ unchanged

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

jfsiii (Contributor Author) commented Jun 30, 2020:

Going to merge because the failure seems unrelated to our plugin or this code, and it has failed before.

neptunian (Contributor) commented Jun 30, 2020:

> That's the status quo behavior: multiple HTTP calls result in multiple calls to that function. I talked about this some in #70008 (comment), but we're making a change at the lowest level of the plugin. So much code was written against the existing behavior that I don't think it's wise to make large changes to it. This way we can see if this works as expected and then move forward with further refactoring.

Okay, that's fine. But just to be clear: I expect multiple calls to the function, but not executing what's in the try block, which runs the actual setup code and checks. We can bail out, since we know the setup already completed successfully. So the hypothetical second call could be "check whether a call is already in flight; if so, await it, then do nothing, because setup is done" instead of "check whether a call is already in flight; if so, await it, then run and await the setup code again (for no reason)". Fine to refactor later. I'm also not totally sure how this will work if we use a saved object to track the state instead, where we can't track the promise.
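In other words, the variant being described is roughly this sketch (illustrative only, not code from this PR):

```ts
let setupIngestStatus: Promise<void> | undefined;

export async function setupIngestManager(/* ...same arguments as today... */) {
  // If setup has already been started (or finished), just await it and bail
  // out; don't run the setup body a second time.
  if (setupIngestStatus) {
    return setupIngestStatus;
  }

  setupIngestStatus = (async () => {
    // ...run the actual setup body exactly once...
  })();

  return setupIngestStatus;
}
```

The difference from the merged change is the early return: later callers share the first promise and never re-enter the setup body.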

jfsiii (Contributor Author) commented Jun 30, 2020:

@neptunian I understand what you're saying. I left out a part:

> Multiple HTTP calls result in multiple calls to that function which executes the same steps no matter how many times it's been called already

The current behavior has no idea whether a prior call succeeded, failed, or is pending. The logic is the same for each run: it calls the same functions each time. Those functions might exit early or do something else, but there is no "did setup run before" logic. I explained why I don't want to change that.

At this point we're each repeating ourselves so I don't think it's worth continuing here.

@jfsiii merged commit 2118439 into elastic:master Jun 30, 2020
Bamieh pushed a commit to Bamieh/kibana that referenced this pull request Jul 1, 2020
…elastic#70008)

* Make setupIngestManager wait if another setup is in progress
jen-huang (Contributor):
Kicking off a backport for this to avoid potential conflicts with other incoming backports.

jen-huang added a commit that referenced this pull request Jul 2, 2020
…#70008) (#70522)

* Make setupIngestManager wait if another setup is in progress

Co-authored-by: John Schulz <john.schulz@elastic.co>
Labels: release_note:skip (Skip the PR/issue when compiling release notes), Team:Fleet (Team label for Observability Data Collection Fleet team), v7.9.0, v8.0.0
Development

Successfully merging this pull request may close these issues.

[Ingest Manager] Two default configs are created after app loads
8 participants