
[ML] Adjust model memory limit for module jobs #41135

Closed
pheyos opened this issue Jul 15, 2019 · 2 comments · Fixed by #41747

pheyos commented Jul 15, 2019

Currently, some jobs created via ML modules are configured with a model memory limit that is too high (e.g. 256mb for metricbeat jobs). This could cause issues on low-memory clusters (e.g. cloud with a free 1gb ML node) during module execution, where all jobs are created and then started almost at the same time.

Suggestion: Revisit the configured model memory limit.

cc: @sophiec20 @zanbel

pheyos added the :ml label Jul 15, 2019
@elasticmachine

Pinging @elastic/ml-ui

@sophiec20

The multi-metric wizard uses the following estimation for partition field detectors with metric functions; this is an approximation, so also round up:
10mb + (64k * num partitions * num detectors) + (10k * num influencers)

If we assume 100 hosts as a high-water mark and 10 event.datasets, then for the following metricbeat jobs (field names are not perfect ECS; a worked sketch follows the list):

  • max disk utilization
    • max(filesystem.used.pct) partition=host.name inf=host.name
    • 25mb
  • metricbeat outages
    • low_count partition=event.dataset inf=event.dataset
    • 15mb
  • high cpu iowait
    • high_mean(cpu.iowait.pct) partition=host.name inf=host.name
    • 25mb
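
A minimal sketch that plugs those assumed numbers (100 hosts, 10 event.datasets, one detector per job) into the estimation formula above; the function name is made up for illustration and is not the actual ML UI code, and the raw estimates are then rounded up to the 25mb/15mb figures listed:

```ts
const MB = 1024 * 1024;
const KB = 1024;

// 10mb + (64k * num partitions * num detectors) + (10k * num influencers)
function estimateModelMemoryMb(
  numPartitions: number,
  numDetectors: number,
  numInfluencers: number
): number {
  const bytes =
    10 * MB + 64 * KB * numPartitions * numDetectors + 10 * KB * numInfluencers;
  return bytes / MB;
}

// max disk utilization / high cpu iowait: partition + influencer = host.name (100 hosts)
console.log(estimateModelMemoryMb(100, 1, 100)); // ~17.2mb raw -> rounded up to 25mb above

// metricbeat outages: partition + influencer = event.dataset (10 datasets)
console.log(estimateModelMemoryMb(10, 1, 10)); // ~10.7mb raw -> rounded up to 15mb above
```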
