Node ignoring --max-old-space-size #6153
Just re-read your message. Can you post the entirety of the top output? It cuts off at
Here's the full command via ps aux:

```
$ ps aux | grep node
kibana   17468  0.8 21.7 1766484 852536 pts/0 Sl 13:20 0:52 /opt/kibana/bin/../node/bin/node --expose-gc --trace-gc --trace-gc-verbose --max-old-space-size=200 /opt/kibana/bin/../src/cli
```
Changed the title. I haven't seen this happen anywhere, but I'll try to replicate.
Is this happening when Kibana is just sitting idle? Are you installing any plugins? Is optimize running?
Kibana is just sitting. No dashboard created. No plugins installed. I haven't optimized anything in Kibana; I just added the ES index pattern in Kibana's settings when it first came up. After restarting Kibana, without even accessing it in a browser, memory is growing. The only optimization I remember doing was with Curator; I told it to optimize indexes on ES. Would that cause Kibana to leak memory?
Kibana has a built-in code optimizer that is separate from the Elasticsearch _optimize operation. It can possibly cause memory to increase temporarily. We're looking into this.
Do you need any logs from my installation? Anything you want me to test/try? Currently Kibana hasn't killed my VM with old-space-size set to 200M. It is using 1 GB of memory, but it hasn't been killed like when I ran it with the defaults and nothing set. It looks like it has stabilized at 1 GB, but that's still a huge amount of memory to be holding while idle.

```
60742 elastics 20 0 3607m 1.8g 114m S 0.3 47.3 20:22.77 /usr/bin/java -Xms1500M -Xmx1500M -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSw
17468 kibana   20 0 1900m 1.0g 6916 S 0.7 27.7 14:18.74 /opt/kibana/bin/../node/bin/node --expose-gc --trace-gc --trace-gc-verbose --max-old-space-siz
```

From my messages log, the OOM killer killed Kibana at around 1.8+ GB of RAM used when I originally ran it with the default settings and no NODE_OPTIONS:

```
Feb  6 04:35:27 elks01 kernel: [60742]   498 60742  921715 454038 0 0 0 java
Feb  6 04:35:27 elks01 kernel: [60857] 19538 60857 1898775 424612 0 0 0 node
--snip-
Feb  6 04:35:27 elks01 kernel: Out of memory: Kill process 60857 (node) score 710 or sacrifice child
Feb  6 04:35:27 elks01 kernel: Killed process 60857, UID 19538, (node) total-vm:7595100kB, anon-rss:1696804kB, file-rss:1644kB
```

Update:
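For reference, a simple way to confirm this kind of growth over time (not from the original thread; a minimal sketch that assumes a single Kibana node process whose command line contains "src/cli" and a standard procps ps) is to sample the process RSS periodically:

```sh
#!/bin/sh
# Sample the resident memory (RSS, in KB) of the Kibana node process every 30 seconds.
# Assumes one node process whose command line contains "src/cli".
while true; do
  pid=$(pgrep -f 'node .*src/cli' | head -n 1)
  [ -n "$pid" ] && echo "$(date '+%F %T') rss_kb=$(ps -o rss= -p "$pid")"
  sleep 30
done
```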
Any update on this issue? We're experiencing the same thing.
We've been trying to reproduce this. We've tried a variety of settings and aren't seeing any memory increase over time, even when setting
So what now? Since I'm not the only one seeing this, there must be a common setup that triggers this issue. My Kibana test server hasn't run out of memory yet, but using 1 GB of RAM when idle with a setting of 200M is kind of bad.

On a new 8 GB VM, I untarred the Kibana tar.gz into /opt/kibana and moved it out of the kibana-4.4-linux-x86_64 directory, stuck my kibana.yml into the config dir with the appropriate certs for SSL to ES, added the --max-old-space-size=200 parameter to kibana in bin, and fired it up as root: cd bin; ./kibana &

In 11 minutes, the result:

```
36024 root 20 0 1129m 223m 9248 S 1.3 2.8 0:11.27 ./../node/bin/node --max-old-space-size=200 ./../src/cli
```

A ps:

```
$ ps auxf | grep node
root     36024  1.2  2.8 1160632 232736 pts/0 Sl 11:23 0:11 \_ ./../node/bin/node --max-old-space-size=200 ./../src/cli
root     36370  0.0  0.0  103304    876 pts/0 S+ 11:38 0:00     \_ grep node
```

Nothing is even accessing this Kibana instance; it's only listening on localhost. It continues to grow at about 1 MB every 3 seconds, so it takes only ~11 minutes to go beyond the 200M limit. Could something in the kibana index on the ES server be "corrupt"/messed up and cause this? Or is it something to do with SSL?
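For anyone wanting to replicate this setup, it boils down to something like the following (a sketch, not the poster's exact commands; the file names and paths are illustrative):

```sh
# Unpack the Kibana 4.4 tarball into /opt/kibana (file names and paths are illustrative).
tar -xzf kibana-4.4.1-linux-x64.tar.gz -C /opt
mv /opt/kibana-4.4.1-linux-x64 /opt/kibana

# Drop a kibana.yml (with the ES URL and SSL cert settings) into the config dir,
# then start the bundled node directly with the V8 old-space cap, matching the
# ps output shown above.
cp kibana.yml /opt/kibana/config/kibana.yml
cd /opt/kibana/bin
./../node/bin/node --max-old-space-size=200 ./../src/cli &
```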
So, just for grins, I did the following test. On the new VM with Kibana, I changed the config to point to a nonexistent ES at http://127.0.0.1:9200 and started Kibana. It complains endlessly in the log about missing ES, but the memory usage is as follows:

```
37084 root 20 0 821m 116m 8212 S 0.0 1.5 0:06.55 ./../node/bin/node --max-old-space-size=200 ./../src/cli
```

Not bad after 7 minutes.

On the other Kibana server, I removed the .kibana index and restarted Kibana, pointing to the ES behind the SSL proxy with SSL certs configured in Kibana:

```
9826 kibana 20 0 1161m 319m 9240 S 0.3 8.4 0:18.36 /opt/kibana/bin/../node/bin/node --max-old-space-size=200 /opt/kibana/bin/../src/cli
```

Its memory is growing like before, at 319M and climbing. Neither Kibana is being accessed, but the Kibana pointing at an ES instance with certs is consuming memory at a good 1 MB every 3-5 seconds. Definitely something is up with having SSL certs in Kibana:

```
9826 kibana 20 0 1574m 717m 9240 S 0.7 18.7 0:42.61 /opt/kibana/bin/../node/bin/node --max-old-space-size=200 /opt/kibana/bin/../src/cli

$ ps aux | grep kiban
kibana    9826  0.8 18.7 1614488 736304 pts/1 Sl 12:05 0:42 /opt/kibana/bin/../node/bin/node --max-old-space-size=200 /opt/kibana/bin/../src/cli
```

Kibana without SSL:

```
37084 root 20 0 830m 124m 8212 S 0.0 1.6 0:10.80 ./../node/bin/node --max-old-space-size=200 ./../src/cli

$ ps aux | grep node
root     37084  0.2  1.5 850076 127284 pts/0 Sl 12:02 0:10 ./../node/bin/node --max-old-space-size=200 ./../src/cli
```
Which Node.js version do you use? Any change when you switch, e.g., to 4.3 or 5.6.0?
Rashidkpc, with the same setup but without SSL, Kibana maintains itself at 240M with --max-old-space-size=200. Looks like there is something weird when SSL is added to the equation. We can't run ES without SSL, so that's not an option; per security protocol, all our web stuff needs to be HTTPS, so we have to put both ES and Kibana behind an SSL proxy. As soon as I put ES behind an SSL proxy and add the SSL cert option to Kibana, it ramps up to 1.0-1.1 GB of resident memory when --max-old-space-size is set to 200M.

megastef,
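For context, the SSL-to-Elasticsearch setup being described lives in kibana.yml. A minimal sketch of the settings involved (the key names follow the Kibana 4.x config format as I recall it; the host name and file paths are placeholders, not the poster's values):

```sh
# Append the ES URL and client SSL settings to kibana.yml (placeholder values).
cat >> /opt/kibana/config/kibana.yml <<'EOF'
elasticsearch.url: "https://es-proxy.example.com:9200"
elasticsearch.ssl.ca: /etc/pki/tls/certs/es-ca.pem
elasticsearch.ssl.cert: /etc/pki/tls/certs/kibana-client.pem
elasticsearch.ssl.key: /etc/pki/tls/private/kibana-client.key
elasticsearch.ssl.verify: true
EOF
```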
Same issue here with Kibana 4.3.1 on an ES (2.1.1) cluster with SSL.
Same issue here on a clean Ubuntu 14.04 install with nothing but the basic Ubuntu repositories and Kibana 4.4.1, with the bundled Node.js (v0.12.10). Update: I was too quick in my conclusions. The
The same issue on a clean Debian 8.3 with Kibana 4.4.1 (bundled Node.js v0.12.10) and Elasticsearch 2.2.0. I have no SSL configuration.
Hi, yesterday we updated to Kibana 4.3.1 with Node.js 4.3.1 (yes, not a mistake :) and Elasticsearch 2.2.0, and set

Why is Kibana using Node 0.12?
Guys, do your health checks sometimes return with timeouts? I just removed the health check (commented it out in the Kibana source code), and memory does not grow as fast as before. (Kibana checks the ES version and status every few seconds.) Look at this chart after the last restart at >18:30. This is the line removed: https://github.com/elastic/kibana/blob/4.3.1/src/plugins/elasticsearch/index.js#L62 Maybe the Kibana team wants to work on those healthCheck functions :)
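For anyone trying the same workaround, the change amounts to commenting out the single line linked above. A rough sketch, assuming a tarball install under /opt/kibana; note that this disables Kibana's periodic Elasticsearch status polling entirely:

```sh
# Back up the file, then comment out the healthCheck start call in the
# elasticsearch plugin (the line linked above). Restart Kibana afterwards.
cd /opt/kibana
cp src/plugins/elasticsearch/index.js src/plugins/elasticsearch/index.js.bak
sed -i 's|healthCheck(this, server).start();|// healthCheck(this, server).start();|' \
  src/plugins/elasticsearch/index.js
```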
I can reproduce Kibana growing (or rather, leaking) memory. It's idle, has data on it, and has the Timelion plugin installed. 3 hours ago it was at 1166416 bytes and now it's at 1489152. Unfortunately my startup command seems pretty generic, nothing long, so I'm not sure if this is the right issue? P.S. I run this under runit; that's my run file:

```sh
#!/bin/sh
version="4.4.0"
exec 2>&1
cd /opt/kibana-${version}-linux-x64
#cd /opt/kibana-4.1.1-linux-x64
exec chpst -l /var/run/kibana.lock -- bin/kibana
```

ES is in a Docker container; this is a Linux host running a 3.17.0 kernel (a bit old). Nothing in the runit logs for the past week (haven't touched it since the 17th).
Little update: since I removed the health checks in my local Kibana, it runs with stable memory usage of <100 MB (see the chart). The change is at https://github.com/elastic/kibana/blob/4.4.1/src/plugins/elasticsearch/index.js#L62 -> `// healthCheck(this, server).start();`
My bad, @megastef was right. Removing that line (commenting it out) works; I had assumed it started a web server, but it just creates the health check loop. Also, it's /much/ faster.
Bringing this comment from @megastef back here:
Maybe the issue here isn't the
This is normally for CPU flame graphs, but having the JIT visible to perf may work for memory as well: http://www.brendangregg.com/blog/2014-09-17/node-flame-graphs-on-linux.html
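The approach from that post, roughly (a sketch: the Kibana paths come from earlier in this thread, and Brendan Gregg's FlameGraph scripts have to be fetched separately from https://github.com/brendangregg/FlameGraph):

```sh
# Start the bundled node with --perf-basic-prof so perf can resolve JIT'd frames,
# then sample the process for 30 seconds and render a flame graph.
/opt/kibana/node/bin/node --perf-basic-prof /opt/kibana/src/cli &
perf record -F 99 -p "$(pgrep -f 'node.*src/cli')" -g -- sleep 30
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > kibana-flamegraph.svg
```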
I'm able to reproduce this. I'm running Kibana 4.4.1, as root (sorry), inside a Docker container. I'm passing

No SSL. Default Kibana config except for the Elasticsearch URL and the Kibana listening port. Maybe a couple of minutes of use, then sitting idle.
Just pushed 4.4.2, 4.3.3 and 4.1.6, all with updated node versions: https://www.elastic.co/blog/kibana-4-4-2-and-4-3-3-and-4-1-6 This should fix the runaway memory issues users have been seeing, but we'd love some confirmation from some of the users that have commented here. When you get around to upgrading, please drop back in here and let us know if things look better for you too. Thanks!
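A quick way to confirm the updated bundle after upgrading (a sketch assuming the tarball layout used earlier in this thread):

```sh
# Check the node version bundled with the upgraded install, and confirm the
# running Kibana process picked up the new binary.
/opt/kibana/node/bin/node --version
ps aux | grep '[s]rc/cli'
```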
Hi, I am using Kibana 4.2.2. Should I try to change to another version, or are you also going to update Kibana 4.2.2? Thanks.
4.2.2 looks fixed in my use case! With --max-old-space-size=200 my instance stays well within that limit. Edit: I meant 4.4.2.
4.2 is outside of our support window, and while 4.3 and 4.1 are also, we decided to over-deliver on those versions anyway. So you'd need to at least upgrade to 4.3.3 (and consequently ES 2.1 or higher, if you're not there already) to get the fix.
We are currently working on ES 2.0.2, which means that we are bound to Kibana 4.2.2.
@eadgbe What is your obstacle to upgrading to ES 2.1?
Hi guys, in the end I upgraded the cluster, and it is working. But I still needed to use the node parameter:
I'm going to close this since this should be fixed in recent versions of Kibana. There's also a feature request open to make this a little easier to configure: #6727
I encountered this problem with Kibana 4.5.4 and Elasticsearch 2.3.4. I hope this will solve my issue; if I still have the problem, I will report back later.
@mnhan3 How did you solve the Kibana "--max-old-space-size" problem?
I'm running Kibana v4.6.1 in a docker container based on CentOS 7. The RPM package contains Node v4.4.7:
I think the correct option for Node is `--max_old_space_size`.
Kibana seems to be running fine with the following start command and a memory limit of 300 MB for the Docker container.
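The start command itself was not captured above. Purely as an illustration of such a setup (the image name, paths, and the NODE_OPTIONS pass-through in bin/kibana are assumptions, not the poster's actual command):

```sh
# Hypothetical: run a memory-capped CentOS 7 Kibana 4.6.1 container and pass the
# underscore form of the V8 flag via NODE_OPTIONS.
docker run -d --name kibana -m 300m -p 5601:5601 \
  -e NODE_OPTIONS="--max_old_space_size=100" \
  my-centos7-kibana:4.6.1 /opt/kibana/bin/kibana
```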
@bm-skutzke Thanks a lot. Everyone writes "--max-old-space-size" except you; you point out that the correct way is "--max_old_space_size". I really appreciate it.
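As far as I know, V8's flag parser treats dashes and underscores in flag names interchangeably, so both spellings should reach the same setting. Either way, you can verify the limit actually took effect against the bundled node (a sketch assuming Node 4.x, which ships the v8 module, and the /opt/kibana layout):

```sh
# Print the heap limit V8 actually applied; with a 200 MB cap, heap_size_limit
# should come out near 200 MB.
/opt/kibana/node/bin/node --max_old_space_size=200 -e \
  'console.log(Math.round(require("v8").getHeapStatistics().heap_size_limit / 1048576), "MB")'
```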
What is the actual solution here, upgrade or configuration? I'm running Kibana 4.3.2 and have tried both spellings of the option
@Floresj4 The latest versions of 4.x and 5.x shouldn't need any max old space wrangling.
@epixa Thanks for the reply. Is there a solution that does not require upgrading?
I'm not aware of any off-hand, if the max_old_space_size setting isn't working for you.
top output (Kibana started with --max-old-space-size=200 along with GC tracing):
Nothing is accessing Kibana. There's only 1 index in Elasticsearch. The Kibana node process continues to eat memory until it gets killed by the OOM killer.
stdout of gc tracing shows:
This is a RHEL 6 instance on VMware with 4 GB of RAM allocated. It's running ES 2.2 and Kibana 4.4 behind a web SSL proxy. How can I get Kibana to stop running the box out of RAM? I'm already using the latest Kibana release and using the old-space flag.
Mike