EOS leaks filedescriptors #268

Closed
jnweiger opened this issue May 12, 2020 · 19 comments

@jnweiger
Contributor

Test setup via https://gitea.owncloud.services/jw/hetzner/src/branch/master/make_ocis_test.sh

cd ocis
make generate build
./bin/ocis server

Client connected via https://localhost:9200.

After letting this sit for several hours, the logfile suddenly starts scrolling with:

2020-05-12 10:03:21.559980 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:22.560460 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:23.560853 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:24.561109 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12T10:03:25+02:00 WRN core access token not set pkg=rhttp service=reva traceid=1887309976bfc20028f17bde6fee08d6
2020-05-12T10:03:25+02:00 INF unary code=OK end="12/May/2020:10:03:25 +0200" from=tcp://127.0.0.1:50554 pkg=rgrpc service=reva start="12/May/2020:10:03:25 +0200" time_ns=41693 traceid=1887309976bfc20028f17bde6fee08d6 uri=/cs3.auth.registry.v1beta1.RegistryAPI/GetAuthProvider user-agent=grpc-go/1.26.0
2020-05-12T10:03:25+02:00 WRN root/go/pkg/mod/github.com/cs3org/reva@v0.1.1-0.20200427161359-c1549a8110eb/internal/grpc/services/authprovider/authprovider.go:107 > error authenticating user error="authsvc: error in Authenticate: oidc: error getting userinfo: +Get \"https://localhost:9200/konnect/v1/userinfo\": dial tcp: lookup localhost: device or resource busy" pkg=rgrpc service=reva traceid=1887309976bfc20028f17bde6fee08d6
2020-05-12T10:03:25+02:00 INF unary code=OK end="12/May/2020:10:03:25 +0200" from=tcp://127.0.0.1:57230 pkg=rgrpc service=reva start="12/May/2020:10:03:25 +0200" time_ns=923729 traceid=1887309976bfc20028f17bde6fee08d6 uri=/cs3.auth.provider.v1beta1.ProviderAPI/Authenticate user-agent=grpc-go/1.26.0
2020-05-12T10:03:25+02:00 ERR error authenticating credentials to auth provider for type: bearer error="gateway: grpc failed with code CODE_UNAUTHENTICATED" pkg=rgrpc service=reva traceid=1887309976bfc20028f17bde6fee08d6
2020-05-12T10:03:25+02:00 WRN root/go/pkg/mod/github.com/cs3org/reva@v0.1.1-0.20200427161359-c1549a8110eb/internal/grpc/services/gateway/authprovider.go:66 >  error="gateway: grpc failed with code CODE_UNAUTHENTICATED" pkg=rgrpc service=reva traceid=1887309976bfc20028f17bde6fee08d6
2020-05-12T10:03:25+02:00 INF unary code=OK end="12/May/2020:10:03:25 +0200" from=tcp://127.0.0.1:50552 pkg=rgrpc service=reva start="12/May/2020:10:03:25 +0200" time_ns=4476108 traceid=1887309976bfc20028f17bde6fee08d6 uri=/cs3.gateway.v1beta1.GatewayAPI/Authenticate user-agent=grpc-go/1.26.0
2020-05-12T10:03:25+02:00 ERR error generating access token from credentials error="auth: grpc failed with code CODE_UNAUTHENTICATED" pkg=rhttp service=reva traceid=1887309976bfc20028f17bde6fee08d6
2020-05-12T10:03:25+02:00 WRN http end="12/May/2020:10:03:25 +0200" host=127.0.0.1 method=GET pkg=rhttp proto=HTTP/1.1 service=reva size=0 start="12/May/2020:10:03:25 +0200" status=401 time_ns=5845048 traceid=1887309976bfc20028f17bde6fee08d6 uri=/ocs/v2.php/apps/notifications/api/v1/notifications?format=json url=/ocs/v2.php/apps/notifications/api/v1/notifications?format=json
2020-05-12 10:03:25.561552 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:26.561949 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:27.562247 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:28.562658 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:29.562970 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:30.563322 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s
2020-05-12 10:03:31.563623 I | http: Accept error: accept tcp [::]:9200: accept4: too many open files; retrying in 1s

Using lsof, it can be seen that ca. 20 processes have opened every file descriptor number from 0 to 1023. There seems to be a per-process limit of 1024. OK.
All connections are idle, with no client activity during the night. ocis should close unused file descriptors.
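
For reference, 1024 is the default soft RLIMIT_NOFILE on most Linux systems, which would explain the ceiling seen here. A process can query the limit itself; a minimal Go sketch (generic, not ocis code):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// RLIMIT_NOFILE caps the number of file descriptors per process;
	// the soft limit (Cur) is what accept4 ran into above.
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("open-file limit: soft=%d hard=%d\n", rl.Cur, rl.Max)
}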

In this state, the system accepts connections, but then hangs:

telnet 116.203.242.4 9200
Trying 116.203.242.4...
Connected to 116.203.242.4.
Escape character is '^]'.
GET /

Incoming connections cause no log messages on the console output.

@individual-it
Member

To force it to happen earlier, run the acceptance tests. On my Ubuntu 20.04 system I cannot run all UI tests without increasing the open-file limit.

@micbar
Contributor

micbar commented May 12, 2020

I am experimenting a bit with Prometheus.

open fds:

[Screenshot: Prometheus Time Series Collection and Processing Server (6)]

goroutines:

[Screenshot: Prometheus Time Series Collection and Processing Server (7)]
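
For anyone wanting to reproduce these graphs: open fds and goroutines correspond to the standard process_open_fds and go_goroutines series that the Go Prometheus client exports by default. A generic sketch of exposing them (the :2112 port is arbitrary; ocis wires this up its own way):

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// The default registry already contains the process and Go
	// collectors, i.e. process_open_fds and go_goroutines.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}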

@individual-it @jnweiger

@individual-it
Member

See also #222.

@IljaN
Member

IljaN commented May 29, 2020

Reason: OIDC requests originating from reva use keep-alive and are not reused/cleaned up => the proxy keeps connections open indefinitely.

Fix: cs3org/reva#787

We should probably also harden the proxy against such cases.
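
The usual remedy on the Go side is to reuse a single http.Client with a bounded idle-connection pool, instead of constructing a fresh client (and thus a fresh Transport) per request. A minimal sketch of that pattern only, not the actual reva change; the oidcClient name, URL, and timeout values are illustrative:

package main

import (
	"net"
	"net/http"
	"time"
)

// One shared client: Go can then reuse and eventually expire its
// keep-alive connections. A new http.Client per request creates a new
// Transport whose idle connections are never reclaimed.
var oidcClient = &http.Client{
	Timeout: 10 * time.Second,
	Transport: &http.Transport{
		DialContext:         (&net.Dialer{Timeout: 5 * time.Second}).DialContext,
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     30 * time.Second, // close idle keep-alive conns instead of leaking them
	},
}

func main() {
	resp, err := oidcClient.Get("https://localhost:9200/konnect/v1/userinfo")
	if err == nil {
		resp.Body.Close() // always close bodies, or the connection cannot be reused
	}
}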

@PVince81
Contributor

PVince81 commented Jun 2, 2020

@IljaN is this done? Any module to update in ocis?

@PVince81
Contributor

PVince81 commented Jun 2, 2020

The fix that was done in ocis-reva is already present on the ocis master branch.

Let me know if there are other fixes to pull in through updates.

@IljaN
Member

IljaN commented Jun 17, 2020

Fixed, as https://github.com/cs3org/reva/tree/9b9f2e5af0e9216d59552f94e13416dad4dcc457/pkg is in the latest ocis-reva release, which is in turn in the latest ocis release.

@IljaN IljaN closed this as completed Jun 17, 2020
@IljaN
Member

IljaN commented Jun 17, 2020

@jnweiger Mind re-testing?

@jnweiger
Contributor Author

jnweiger commented Sep 8, 2020

Retested with:

# uptime
 18:22:35 up 17:52,  1 user,  load average: 0.05, 0.15, 0.12
# lsof > lsof.out; grep xrootd lsof.out  | grep -v REG | wc -l
115829

# grep xrootd lsof.out  | grep -v REG | grep 289u | wc -l
361

Two patterns are recognizable in this system:

  • 361 processes with exactly 289 open file descriptors, and
  • 130 processes with exactly 97 open file descriptors.

Example snippet from an lsof grep:

xrootd     27390  28914 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28915 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28916 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28917 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28918 default-e              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28919 resolver-              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28920 grpc_glob              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28921 grpcpp_sy              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28930 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28931 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28932 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28956 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  28957 xrootd                 bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  29001 libmicroh              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  29002 libmicroh              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  29003 libmicroh              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  29004 libmicroh              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  29005 libmicroh              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  29006 libmicroh              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  31807 IOThreadP              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  31808 IOThreadP              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  31809 IOThreadP              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  31810 IOThreadP              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  31811 IOThreadP              bin  251u     sock                0,9       0t0      68497 protocol: UNIX
xrootd     27390  31812 IOThreadP              bin  251u     sock                0,9       0t0      68497 protocol: UNIX

@jnweiger
Contributor Author

jnweiger commented Sep 8, 2020

Connecting and disconnecting clients, syncing, removing syncs, and uploading files over the next hour have no effect on the number of xrootd file descriptors.

root@jw-ocis-v1-0-0-rc1-eos-compose-jmgft:~# grep xrootd lsof.out.200908-212431 | grep -v REG | wc -l
115829
root@jw-ocis-v1-0-0-rc1-eos-compose-jmgft:~# grep xrootd lsof.out.200908-214448 | grep -v REG | wc -l
115829
root@jw-ocis-v1-0-0-rc1-eos-compose-jmgft:~# grep xrootd lsof.out.200908-222120 | grep -v REG | wc -l
115829
root@jw-ocis-v1-0-0-rc1-eos-compose-jmgft:~# grep xrootd lsof.out.200908-223459 | grep -v REG | wc -l
115829
root@jw-ocis-v1-0-0-rc1-eos-compose-jmgft:~# grep xrootd lsof.out.200908-223656 | grep -v REG | wc -l
115829

@jnweiger
Contributor Author

Reproducible. This is a different system; it was used for manual testing today:

# uptime
 00:02:50 up 22:52,  1 user,  load average: 0.11, 0.09, 0.09
# grep xrootd lsof.out  | grep -v REG | wc -l
207602
# grep xrootd lsof.out  | grep -v REG | grep 474u | wc -l
409

@butonic
Member

butonic commented Sep 11, 2020

@jnweiger @micbar xrootd is only used with EOS. ocis.owncloud.works does not use EOS. These are two different issues.

@butonic
Member

butonic commented Sep 11, 2020

Also, locally the number of file descriptors is dropping again. I am using this command to list the number of open files per process:

# run against /proc; prints "<N> fds (PID = <pid>), command: <cmdline>" per process
sudo find /proc -maxdepth 1 -type d -name '[0-9]*' \
     -exec bash -c "ls {}/fd/ | wc -l | tr '\n' ' '" \; \
     -printf "fds (PID = %P), command: " \
     -exec bash -c "tr '\0' ' ' < {}/cmdline" \; \
     -exec echo \; | sort -rn | grep 'ocis'

You should replace the final grep with whatever your binary is called; mine is called main.
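
The same numbers can also be collected from Go by reading /proc directly, which sidesteps the shell quoting; a quick sketch (Linux only, needs the same privileges as the find variant; pipe the output through sort -rn as above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dirs, _ := filepath.Glob("/proc/[0-9]*")
	for _, d := range dirs {
		fds, err := os.ReadDir(filepath.Join(d, "fd")) // one entry per open fd
		if err != nil {
			continue // process exited or permission denied
		}
		raw, _ := os.ReadFile(filepath.Join(d, "cmdline"))
		cmd := strings.ReplaceAll(string(raw), "\x00", " ") // cmdline is NUL-separated
		fmt.Printf("%d fds (PID = %s), command: %s\n", len(fds), filepath.Base(d), cmd)
	}
}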

  1. Before litmus:
118 fds (PID = 7440), command: /tmp/go-build369324108/b001/exe/main accounts
117 fds (PID = 7337), command: /tmp/go-build369324108/b001/exe/main reva-gateway
114 fds (PID = 7272), command: /tmp/go-build369324108/b001/exe/main proxy
111 fds (PID = 7310), command: /tmp/go-build369324108/b001/exe/main ocs
110 fds (PID = 7274), command: /tmp/go-build369324108/b001/exe/main settings
109 fds (PID = 7315), command: /tmp/go-build369324108/b001/exe/main webdav
108 fds (PID = 7476), command: /tmp/go-build369324108/b001/exe/main api
108 fds (PID = 7273), command: /tmp/go-build369324108/b001/exe/main store
106 fds (PID = 7448), command: /tmp/go-build369324108/b001/exe/main glauth
105 fds (PID = 7460), command: /tmp/go-build369324108/b001/exe/main thumbnails
105 fds (PID = 7453), command: /tmp/go-build369324108/b001/exe/main konnectd
104 fds (PID = 7486), command: /tmp/go-build369324108/b001/exe/main web
104 fds (PID = 7304), command: /tmp/go-build369324108/b001/exe/main graph-explorer
104 fds (PID = 7296), command: /tmp/go-build369324108/b001/exe/main graph
104 fds (PID = 7290), command: /tmp/go-build369324108/b001/exe/main phoenix
103 fds (PID = 7496), command: /tmp/go-build369324108/b001/exe/main registry
102 fds (PID = 7555), command: /tmp/go-build369324108/b001/exe/main reva-sharing
102 fds (PID = 7330), command: /tmp/go-build369324108/b001/exe/main reva-frontend
101 fds (PID = 7425), command: /tmp/go-build369324108/b001/exe/main reva-storage-public-link
101 fds (PID = 7412), command: /tmp/go-build369324108/b001/exe/main reva-storage-oc
101 fds (PID = 7378), command: /tmp/go-build369324108/b001/exe/main reva-storage-home
101 fds (PID = 7353), command: /tmp/go-build369324108/b001/exe/main reva-auth-basic
101 fds (PID = 7344), command: /tmp/go-build369324108/b001/exe/main reva-users
101 fds (PID = 7263), command: /tmp/go-build369324108/b001/exe/main server
100 fds (PID = 7423), command: /tmp/go-build369324108/b001/exe/main reva-storage-oc-data
100 fds (PID = 7402), command: /tmp/go-build369324108/b001/exe/main reva-storage-eos-data
100 fds (PID = 7391), command: /tmp/go-build369324108/b001/exe/main reva-storage-eos
100 fds (PID = 7386), command: /tmp/go-build369324108/b001/exe/main reva-storage-home-data
100 fds (PID = 7365), command: /tmp/go-build369324108/b001/exe/main reva-auth-bearer
  2. After litmus:
293 fds (PID = 7330), command: /tmp/go-build369324108/b001/exe/main reva-frontend
230 fds (PID = 7386), command: /tmp/go-build369324108/b001/exe/main reva-storage-home-data
210 fds (PID = 7272), command: /tmp/go-build369324108/b001/exe/main proxy
118 fds (PID = 7440), command: /tmp/go-build369324108/b001/exe/main accounts
117 fds (PID = 7337), command: /tmp/go-build369324108/b001/exe/main reva-gateway
111 fds (PID = 7310), command: /tmp/go-build369324108/b001/exe/main ocs
110 fds (PID = 7274), command: /tmp/go-build369324108/b001/exe/main settings
109 fds (PID = 7315), command: /tmp/go-build369324108/b001/exe/main webdav
108 fds (PID = 7476), command: /tmp/go-build369324108/b001/exe/main api
108 fds (PID = 7273), command: /tmp/go-build369324108/b001/exe/main store
106 fds (PID = 7448), command: /tmp/go-build369324108/b001/exe/main glauth
105 fds (PID = 7460), command: /tmp/go-build369324108/b001/exe/main thumbnails
105 fds (PID = 7453), command: /tmp/go-build369324108/b001/exe/main konnectd
104 fds (PID = 7486), command: /tmp/go-build369324108/b001/exe/main web
104 fds (PID = 7378), command: /tmp/go-build369324108/b001/exe/main reva-storage-home
104 fds (PID = 7304), command: /tmp/go-build369324108/b001/exe/main graph-explorer
104 fds (PID = 7296), command: /tmp/go-build369324108/b001/exe/main graph
104 fds (PID = 7290), command: /tmp/go-build369324108/b001/exe/main phoenix
103 fds (PID = 7496), command: /tmp/go-build369324108/b001/exe/main registry
102 fds (PID = 7555), command: /tmp/go-build369324108/b001/exe/main reva-sharing
101 fds (PID = 7425), command: /tmp/go-build369324108/b001/exe/main reva-storage-public-link
101 fds (PID = 7412), command: /tmp/go-build369324108/b001/exe/main reva-storage-oc
101 fds (PID = 7353), command: /tmp/go-build369324108/b001/exe/main reva-auth-basic
101 fds (PID = 7344), command: /tmp/go-build369324108/b001/exe/main reva-users
101 fds (PID = 7263), command: /tmp/go-build369324108/b001/exe/main server
100 fds (PID = 7423), command: /tmp/go-build369324108/b001/exe/main reva-storage-oc-data
100 fds (PID = 7402), command: /tmp/go-build369324108/b001/exe/main reva-storage-eos-data
100 fds (PID = 7391), command: /tmp/go-build369324108/b001/exe/main reva-storage-eos
100 fds (PID = 7365), command: /tmp/go-build369324108/b001/exe/main reva-auth-bearer
  3. And after 60 sec, all back to normal:
118 fds (PID = 7440), command: /tmp/go-build369324108/b001/exe/main accounts
117 fds (PID = 7337), command: /tmp/go-build369324108/b001/exe/main reva-gateway
114 fds (PID = 7272), command: /tmp/go-build369324108/b001/exe/main proxy
111 fds (PID = 7310), command: /tmp/go-build369324108/b001/exe/main ocs
110 fds (PID = 7274), command: /tmp/go-build369324108/b001/exe/main settings
109 fds (PID = 7315), command: /tmp/go-build369324108/b001/exe/main webdav
108 fds (PID = 7476), command: /tmp/go-build369324108/b001/exe/main api
108 fds (PID = 7273), command: /tmp/go-build369324108/b001/exe/main store
106 fds (PID = 7448), command: /tmp/go-build369324108/b001/exe/main glauth
105 fds (PID = 7460), command: /tmp/go-build369324108/b001/exe/main thumbnails
105 fds (PID = 7453), command: /tmp/go-build369324108/b001/exe/main konnectd
104 fds (PID = 7486), command: /tmp/go-build369324108/b001/exe/main web
104 fds (PID = 7304), command: /tmp/go-build369324108/b001/exe/main graph-explorer
104 fds (PID = 7296), command: /tmp/go-build369324108/b001/exe/main graph
104 fds (PID = 7290), command: /tmp/go-build369324108/b001/exe/main phoenix
103 fds (PID = 7496), command: /tmp/go-build369324108/b001/exe/main registry
102 fds (PID = 7555), command: /tmp/go-build369324108/b001/exe/main reva-sharing
102 fds (PID = 7330), command: /tmp/go-build369324108/b001/exe/main reva-frontend
101 fds (PID = 7425), command: /tmp/go-build369324108/b001/exe/main reva-storage-public-link
101 fds (PID = 7412), command: /tmp/go-build369324108/b001/exe/main reva-storage-oc
101 fds (PID = 7378), command: /tmp/go-build369324108/b001/exe/main reva-storage-home
101 fds (PID = 7353), command: /tmp/go-build369324108/b001/exe/main reva-auth-basic
101 fds (PID = 7344), command: /tmp/go-build369324108/b001/exe/main reva-users
101 fds (PID = 7263), command: /tmp/go-build369324108/b001/exe/main server
100 fds (PID = 7423), command: /tmp/go-build369324108/b001/exe/main reva-storage-oc-data
100 fds (PID = 7402), command: /tmp/go-build369324108/b001/exe/main reva-storage-eos-data
100 fds (PID = 7391), command: /tmp/go-build369324108/b001/exe/main reva-storage-eos
100 fds (PID = 7386), command: /tmp/go-build369324108/b001/exe/main reva-storage-home-data
100 fds (PID = 7365), command: /tmp/go-build369324108/b001/exe/main reva-auth-bearer

All back to normal ... on ocis storage ... now testing with owncloud storage ...

@micbar
Contributor

micbar commented Sep 11, 2020

@butonic We also have Prometheus and Grafana on ocis.owncloud.works.

I checked it; everything looks normal.

@butonic
Member

butonic commented Sep 11, 2020

  1. Cold, after starting with owncloud storage:
115 fds (PID = 25164), command: /tmp/go-build744324587/b001/exe/main accounts
110 fds (PID = 25001), command: /tmp/go-build744324587/b001/exe/main settings
108 fds (PID = 25194), command: /tmp/go-build744324587/b001/exe/main api
106 fds (PID = 25065), command: /tmp/go-build744324587/b001/exe/main reva-gateway
105 fds (PID = 25000), command: /tmp/go-build744324587/b001/exe/main store
105 fds (PID = 24999), command: /tmp/go-build744324587/b001/exe/main proxy
104 fds (PID = 25203), command: /tmp/go-build744324587/b001/exe/main web
104 fds (PID = 25184), command: /tmp/go-build744324587/b001/exe/main thumbnails
104 fds (PID = 25181), command: /tmp/go-build744324587/b001/exe/main konnectd
104 fds (PID = 25042), command: /tmp/go-build744324587/b001/exe/main webdav
104 fds (PID = 25030), command: /tmp/go-build744324587/b001/exe/main ocs
104 fds (PID = 25029), command: /tmp/go-build744324587/b001/exe/main graph-explorer
104 fds (PID = 25027), command: /tmp/go-build744324587/b001/exe/main graph
104 fds (PID = 25015), command: /tmp/go-build744324587/b001/exe/main phoenix
103 fds (PID = 25214), command: /tmp/go-build744324587/b001/exe/main registry
101 fds (PID = 25175), command: /tmp/go-build744324587/b001/exe/main glauth
101 fds (PID = 25157), command: /tmp/go-build744324587/b001/exe/main reva-storage-public-link
101 fds (PID = 24989), command: /tmp/go-build744324587/b001/exe/main server
100 fds (PID = 25276), command: /tmp/go-build744324587/b001/exe/main reva-sharing
100 fds (PID = 25155), command: /tmp/go-build744324587/b001/exe/main reva-storage-oc-data
100 fds (PID = 25131), command: /tmp/go-build744324587/b001/exe/main reva-storage-oc
100 fds (PID = 25120), command: /tmp/go-build744324587/b001/exe/main reva-storage-eos-data
100 fds (PID = 25113), command: /tmp/go-build744324587/b001/exe/main reva-storage-eos
100 fds (PID = 25101), command: /tmp/go-build744324587/b001/exe/main reva-storage-home-data
100 fds (PID = 25092), command: /tmp/go-build744324587/b001/exe/main reva-storage-home
100 fds (PID = 25084), command: /tmp/go-build744324587/b001/exe/main reva-auth-bearer
100 fds (PID = 25082), command: /tmp/go-build744324587/b001/exe/main reva-auth-basic
100 fds (PID = 25081), command: /tmp/go-build744324587/b001/exe/main reva-users
100 fds (PID = 25056), command: /tmp/go-build744324587/b001/exe/main reva-frontend
  2. After litmus (which completes many more tests, so more connections are made):
355 fds (PID = 25056), command: /tmp/go-build744324587/b001/exe/main reva-frontend
246 fds (PID = 25101), command: /tmp/go-build744324587/b001/exe/main reva-storage-home-data
239 fds (PID = 24999), command: /tmp/go-build744324587/b001/exe/main proxy
117 fds (PID = 25164), command: /tmp/go-build744324587/b001/exe/main accounts
113 fds (PID = 25065), command: /tmp/go-build744324587/b001/exe/main reva-gateway
110 fds (PID = 25001), command: /tmp/go-build744324587/b001/exe/main settings
108 fds (PID = 25194), command: /tmp/go-build744324587/b001/exe/main api
106 fds (PID = 25175), command: /tmp/go-build744324587/b001/exe/main glauth
105 fds (PID = 25181), command: /tmp/go-build744324587/b001/exe/main konnectd
105 fds (PID = 25000), command: /tmp/go-build744324587/b001/exe/main store
104 fds (PID = 25203), command: /tmp/go-build744324587/b001/exe/main web
104 fds (PID = 25184), command: /tmp/go-build744324587/b001/exe/main thumbnails
104 fds (PID = 25042), command: /tmp/go-build744324587/b001/exe/main webdav
104 fds (PID = 25030), command: /tmp/go-build744324587/b001/exe/main ocs
104 fds (PID = 25029), command: /tmp/go-build744324587/b001/exe/main graph-explorer
104 fds (PID = 25027), command: /tmp/go-build744324587/b001/exe/main graph
104 fds (PID = 25015), command: /tmp/go-build744324587/b001/exe/main phoenix
103 fds (PID = 25214), command: /tmp/go-build744324587/b001/exe/main registry
102 fds (PID = 25092), command: /tmp/go-build744324587/b001/exe/main reva-storage-home
101 fds (PID = 25157), command: /tmp/go-build744324587/b001/exe/main reva-storage-public-link
101 fds (PID = 25082), command: /tmp/go-build744324587/b001/exe/main reva-auth-basic
101 fds (PID = 24989), command: /tmp/go-build744324587/b001/exe/main server
100 fds (PID = 25276), command: /tmp/go-build744324587/b001/exe/main reva-sharing
100 fds (PID = 25155), command: /tmp/go-build744324587/b001/exe/main reva-storage-oc-data
100 fds (PID = 25131), command: /tmp/go-build744324587/b001/exe/main reva-storage-oc
100 fds (PID = 25120), command: /tmp/go-build744324587/b001/exe/main reva-storage-eos-data
100 fds (PID = 25113), command: /tmp/go-build744324587/b001/exe/main reva-storage-eos
100 fds (PID = 25084), command: /tmp/go-build744324587/b001/exe/main reva-auth-bearer
100 fds (PID = 25081), command: /tmp/go-build744324587/b001/exe/main reva-users
  3. After 60 sec:
117 fds (PID = 25164), command: /tmp/go-build744324587/b001/exe/main accounts
113 fds (PID = 25065), command: /tmp/go-build744324587/b001/exe/main reva-gateway
112 fds (PID = 24999), command: /tmp/go-build744324587/b001/exe/main proxy
110 fds (PID = 25001), command: /tmp/go-build744324587/b001/exe/main settings
108 fds (PID = 25194), command: /tmp/go-build744324587/b001/exe/main api
106 fds (PID = 25175), command: /tmp/go-build744324587/b001/exe/main glauth
105 fds (PID = 25181), command: /tmp/go-build744324587/b001/exe/main konnectd
105 fds (PID = 25000), command: /tmp/go-build744324587/b001/exe/main store
104 fds (PID = 25203), command: /tmp/go-build744324587/b001/exe/main web
104 fds (PID = 25184), command: /tmp/go-build744324587/b001/exe/main thumbnails
104 fds (PID = 25042), command: /tmp/go-build744324587/b001/exe/main webdav
104 fds (PID = 25030), command: /tmp/go-build744324587/b001/exe/main ocs
104 fds (PID = 25029), command: /tmp/go-build744324587/b001/exe/main graph-explorer
104 fds (PID = 25027), command: /tmp/go-build744324587/b001/exe/main graph
104 fds (PID = 25015), command: /tmp/go-build744324587/b001/exe/main phoenix
103 fds (PID = 25214), command: /tmp/go-build744324587/b001/exe/main registry
102 fds (PID = 25092), command: /tmp/go-build744324587/b001/exe/main reva-storage-home
101 fds (PID = 25157), command: /tmp/go-build744324587/b001/exe/main reva-storage-public-link
101 fds (PID = 25082), command: /tmp/go-build744324587/b001/exe/main reva-auth-basic
101 fds (PID = 25056), command: /tmp/go-build744324587/b001/exe/main reva-frontend
101 fds (PID = 24989), command: /tmp/go-build744324587/b001/exe/main server
100 fds (PID = 25276), command: /tmp/go-build744324587/b001/exe/main reva-sharing
100 fds (PID = 25155), command: /tmp/go-build744324587/b001/exe/main reva-storage-oc-data
100 fds (PID = 25131), command: /tmp/go-build744324587/b001/exe/main reva-storage-oc
100 fds (PID = 25120), command: /tmp/go-build744324587/b001/exe/main reva-storage-eos-data
100 fds (PID = 25113), command: /tmp/go-build744324587/b001/exe/main reva-storage-eos
100 fds (PID = 25101), command: /tmp/go-build744324587/b001/exe/main reva-storage-home-data
100 fds (PID = 25084), command: /tmp/go-build744324587/b001/exe/main reva-auth-bearer
100 fds (PID = 25081), command: /tmp/go-build744324587/b001/exe/main reva-users

glauth and settings leave gRPC connections open, which is intended.
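
That is expected: grpc-go multiplexes all RPCs over a single long-lived HTTP/2 connection per backend, so a kept-open connection shows up as one steady fd rather than a growing count. A generic sketch of the pattern (the address and credentials are illustrative, not ocis configuration):

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// One long-lived connection, shared by every client and RPC in the
	// process: a constant fd count by design.
	conn, err := grpc.Dial("localhost:9125", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// build service clients from conn and reuse them everywhere
}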

@butonic
Member

butonic commented Sep 11, 2020

@micbar OK, so only EOS leaks? Let's make the title more precise, because it is not ocis itself that leaks.

@micbar micbar changed the title from "ocis leaks filedescriptors" to "EOS leaks filedescriptors" Sep 11, 2020
@butonic
Member

butonic commented Sep 11, 2020

@ishank011 @labkode do you see this file descriptor leak as well? AFAICT some xrootd processes are not killed...

@refs
Member

refs commented Oct 22, 2020

@jnweiger cs3org/reva#1260: this PR tackles all fd leaks known so far. The changes are currently on master, so could we give this a try again? :)

@butonic
Member

butonic commented Nov 19, 2020

Reopen if this is still an issue.
