Currently, some jobs created via ML modules are configured with too high a model memory limit (e.g. 256 MB for Metricbeat jobs). This could cause issues on low-memory clusters (e.g. cloud with a free 1 GB ML node) during module execution, where all jobs are created and then started almost at the same time.
Suggestion: Revisit the configured model memory limits.
The multi-metric wizard uses the following estimation for partition field detectors with metric functions (this is an approximation, so also round up): 10 MB + (64 KB × num partitions × num detectors) + (10 KB × num influencers)
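As a sketch of how that heuristic plays out, the snippet below (hypothetical helper names, not part of the ML module code) computes the estimate and rounds it up to whole megabytes:

```python
import math

def estimate_mml_kb(num_partitions, num_detectors, num_influencers=0):
    """Approximate model memory limit in KB, per the multi-metric wizard
    heuristic: 10 MB base + 64 KB per partition per detector
    + 10 KB per influencer."""
    return 10 * 1024 + 64 * num_partitions * num_detectors + 10 * num_influencers

def round_up_mb(kb):
    """Round the KB estimate up to whole MB, since it is an approximation."""
    return math.ceil(kb / 1024)

# e.g. 100 hosts (partitions), 1 detector, 1 influencer:
kb = estimate_mml_kb(100, 1, 1)   # 10240 + 6400 + 10 = 16650 KB
mb = round_up_mb(kb)              # 17 MB -- far below the current 256 MB
```

Even at 100 partitions, the estimate stays well under the 256 MB currently configured.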
If we assume 100 hosts as a high-water mark and 10 `event.dataset` values, then for the following Metricbeat jobs (field names are not perfect ECS):
cc: @sophiec20 @zanbel