
Sanic server killed #1518

Closed

SamuelBonilla opened this issue Mar 16, 2019 · 12 comments

Comments

@SamuelBonilla

Describe the bug
I'm getting a runtime error with the Sanic server; my server often goes down.

Code snippet
Executing <TimerHandle functools.partial(<function update_current_time at 0x7f8a629aa268>, <uvloop.Loop running=True closed=False debug=True>) created at /usr/local/lib/python3.7/site-packages/sanic/server.py:547> took 0.105 seconds
Executing <TimerHandle functools.partial(<function update_current_time at 0x7f8a629aa268>, <uvloop.Loop running=True closed=False debug=True>) created at /usr/local/lib/python3.7/site-packages/sanic/server.py:547> took 1.148 seconds
Killed

Environment:

  • OS: Ubuntu
  • Host: Digital Ocean
@omarryhan
Contributor

RIP :-/

@omarryhan
Contributor

Can you share more code so that we can reproduce the error?

@SamuelBonilla
Author

Hi @omarryhan,

Full console error:

[2019-03-16 21:15:06 -0500] [1099] [DEBUG] CORS: Request to '/' matches CORS resource '/'. Using options: {'origins': ['.'], 'methods': 'DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT', 'allow_headers': ['.'], 'expose_headers': None, 'supports_credentials': False, 'max_age': None, 'send_wildcard': False, 'automatic_options': False, 'vary_header': True, 'resources': '/', 'intercept_exceptions': True, 'always_send': True}
[2019-03-16 21:15:06 -0500] [1099] [DEBUG] CORS: CORS was handled in the exception handler, skipping
Executing <TimerHandle functools.partial(<function update_current_time at 0x7fb4280a6268>, <uvloop.Loop running=True closed=False debug=True>) created at /usr/local/lib/python3.7/site-packages/sanic/server.py:547> took 0.114 seconds
Executing <TimerHandle functools.partial(<function update_current_time at 0x7f8a629aa268>, <uvloop.Loop running=True closed=False debug=True>) created at /usr/local/lib/python3.7/site-packages/sanic/server.py:547> took 0.105 seconds
Executing <TimerHandle functools.partial(<function update_current_time at 0x7f8a629aa268>, <uvloop.Loop running=True closed=False debug=True>) created at /usr/local/lib/python3.7/site-packages/sanic/server.py:547> took 1.148 seconds
Killed

My server code:

from sanic import Blueprint, Sanic, response
from sanic_cors import CORS, cross_origin
from api.v1 import my_blueprint
from model.db import db
from firebase.firebaseMain import default_app, auth
import os

app = Sanic(__name__)
app.config.db = db
app.config.firebase = {'app': default_app, 'auth': auth}
app.config.user = None
app.config.domine = "http://mi_domine.com/v1/"
app.config.KEEP_ALIVE = False

app.blueprint(my_blueprint)

heroku = False
port = 8080
if heroku:
    # Environment variables are strings; the port must be an int
    port = int(os.environ['PORT'])

CORS(app)

# Note: the keyword argument is `workers`, not `worker`
app.run(host='0.0.0.0', port=port, debug=True, access_log=False, workers=4)

@harshanarayana
Contributor

@SamuelBonilla Based on what I can see in the logs, this might be related to #1500.

If you are on a Linux box, can you check your kernel logs and see if any of the processes is getting killed for some reason?
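For example, something like this on the host (a sketch; dmesg and kern.log availability depend on the host setup):

# Look for OOM-killer activity in the kernel ring buffer
dmesg -T | grep -i -E 'out of memory|killed process'

# Or search the persistent kernel log, if one exists
grep -i 'oom' /var/log/kern.log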

@SamuelBonilla
Author

@harshanarayana I'm using a Docker container.

root@ubuntu-s-1vcpu-1gb-sfo2-01:~# docker logs shopd
1:C 02 Mar 2019 20:05:15.395 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 02 Mar 2019 20:05:15.395 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 02 Mar 2019 20:05:15.395 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 02 Mar 2019 20:05:15.396 * Running mode=standalone, port=6379.
1:M 02 Mar 2019 20:05:15.396 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 02 Mar 2019 20:05:15.396 # Server initialized
1:M 02 Mar 2019 20:05:15.396 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 02 Mar 2019 20:05:15.396 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 02 Mar 2019 20:05:15.397 * DB loaded from disk: 0.001 seconds
1:M 02 Mar 2019 20:05:15.397 * Ready to accept connections

@sjsadowski
Contributor

@SamuelBonilla How are your resource limits set for the containers? Are you using straight docker, swarm, k8s, rancher, or something else?

I'm wondering if you're hitting a resource constraint.
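For example (illustrative commands; shopd is the container name from your logs above, and the image name here is a placeholder):

# Show live resource usage for running containers
docker stats --no-stream

# Check whether a memory limit is set on the container (0 means unlimited)
docker inspect shopd --format '{{.HostConfig.Memory}}'

# Example of starting the container with an explicit memory cap
docker run -d --name shopd --memory 512m --memory-swap 1g your-image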

@SamuelBonilla
Author

SamuelBonilla commented Mar 17, 2019

@sjsadowski I'm using straight Docker; my Docker container doesn't have resource limits.
My Sanic server is running on port 8080, but I have a question:
why are other Python processes created when I run the Sanic server?

root@d766284d89a0# lsof -P -i -n
COMMAND   PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
redis-ser   1 root    6u  IPv6 38493464      0t0  TCP *:6379 (LISTEN)
redis-ser   1 root    7u  IPv4 38493465      0t0  TCP *:6379 (LISTEN)
python     66 root    3u  IPv4 38500825      0t0  TCP 172.17.0.3:53718->50.19.208.11:5432 (ESTABLISHED)
python     66 root    5u  IPv4 38500861      0t0  TCP 172.17.0.3:47884->172.217.5.106:443 (ESTABLISHED)
python     66 root    6u  IPv4 38500865      0t0  TCP 172.17.0.3:50192->172.217.6.74:443 (ESTABLISHED)
python     70 root    3u  IPv4 38500825      0t0  TCP 172.17.0.3:53718->50.19.208.11:5432 (ESTABLISHED)
python     70 root    5u  IPv4 38500861      0t0  TCP 172.17.0.3:47884->172.217.5.106:443 (ESTABLISHED)
python     70 root    6u  IPv4 38500865      0t0  TCP 172.17.0.3:50192->172.217.6.74:443 (ESTABLISHED)
python     72 root    3u  IPv4 38500917      0t0  TCP 172.17.0.3:53724->50.19.208.11:5432 (ESTABLISHED)
python     72 root    5u  IPv4 38500926      0t0  TCP 172.17.0.3:47890->172.217.5.106:443 (ESTABLISHED)
python     72 root    6u  IPv4 38500930      0t0  TCP 172.17.0.3:50198->172.217.6.74:443 (ESTABLISHED)
python     72 root   16u  IPv4 38500935      0t0  TCP *:8080 (LISTEN)

Memory use

root@d766284d89a0# ps_mem
 Private  +   Shared  =  RAM used	Program

200.0 KiB + 198.0 KiB = 398.0 KiB	sh
  1.7 MiB + 385.5 KiB =   2.0 MiB	bash
  2.5 MiB + 377.0 KiB =   2.9 MiB	redis-server
 52.0 MiB +  46.8 MiB =  98.8 MiB	python3.7 (3)
---------------------------------
                        104.1 MiB
=================================

Total memory in docker container

root@d766284d89a0# free -m
              total        used        free      shared  buff/cache   available
Mem:            992         215         104          10         672         506
Swap:             0           0           0


@vltr
Member

vltr commented Mar 18, 2019

This definitely looks related to #1500, as @harshanarayana already stated. @SamuelBonilla, what's your Sanic version? In case you can update to 19.3, try it and see how it goes. If not, 18.12.1 LTS is just around the corner and will have a backport of a memory leak fix that is probably the root cause of your problems 😉
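(A quick way to check the installed version, assuming a standard pip install:)

# Print the Sanic version the server is actually running
python -c 'import sanic; print(sanic.__version__)'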

@sjsadowski
Contributor

Psst @vltr, 19.3 had a PyPI issue; it's not quite there yet.

@vltr
Member

vltr commented Mar 18, 2019

@sjsadowski I just saw that, sorry 😬

@SamuelBonilla , just hold on while we figure this out 😉

@sjsadowski
Contributor

@SamuelBonilla Can you try again with 19.03.1 and see if the issue remains?
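For example (assuming the release string on PyPI is 19.3.1):

# Upgrade to the patched release
pip install --upgrade 'sanic==19.3.1'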

@sjsadowski
Copy link
Contributor

Closing due to lack of further response
