Serv command consumes all server resources #4450

Closed
2 tasks done
xor-gate opened this issue Jul 16, 2018 · 53 comments · Fixed by #5282
Labels
issue/critical (this issue should be fixed ASAP; if it is a PR, the PR should be merged ASAP), type/miscellaneous
Milestone
1.6.0

Comments

@xor-gate
Contributor

xor-gate commented Jul 16, 2018

Description

We use Jenkins to poll for changes in Git. It uses OpenSSH for access, and Gitea spawns a serv command per poll. There are many concurrent connections (Jenkins is set to 10 concurrent SCM pollers) and the server's memory and CPU get exhausted. It doesn't happen very often, and I'm not sure whether it's a deadlock/race condition in Gitea. We are going to upgrade to 1.4.3, but according to the changelog it will probably not fix the problem for us.

If you need some more information, we can hopefully provide it, as it only happens about once a day.

Screenshots

(screenshot attached)

@xor-gate
Contributor Author

We have upgraded to v1.4.3. If it happens again, we will try to come up with some more details.

@xor-gate
Contributor Author

It still seems to be an issue multiple times a day with v1.4.3.

@xor-gate
Contributor Author

I think it is because we have only assigned 16 GB of memory and some swap. Every connection uses ~1500M of virtual memory and ~500M of resident memory. That is an insane amount of memory.

@lafriks
Member

lafriks commented Jul 17, 2018

Is there anything useful in gitea.log at such times?

@xor-gate
Contributor Author

I have had a look at the log files but there is nothing special in them. I will keep an eye on it. But what about dumping pprof debug profiles to disk for the serv command, to see the actual memory usage hotspots?

@xor-gate
Contributor Author

xor-gate commented Jul 27, 2018

Hi guys, I have no exact explanation, but the problem has not been seen since we updated Jenkins. Still, I think it is good to have the ability to create pprof dumps for each new serv process to figure out the memory usage, because the Linux OOM killer can come by, reap other processes and make the server unusable in cases where SSH is requested at a high rate (big deployments).

@lafriks
Member

lafriks commented Jul 27, 2018

Currently it is hard to find the cause of this without the ability to reproduce such behaviour.

@xor-gate
Contributor Author

I totally understand. Would it be possible for me to add a feature for writing pprof memory dumps to disk on exit behind a special flag? Currently there is only an HTTP pprof endpoint, which cannot be used with short-lived serv executions under SSH. Then we could profile CPU and heap usage and draw better conclusions about this weird "smoking server" problem.

https://github.com/go-gitea/gitea/search?q=pprof&unscoped_q=pprof
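For illustration, a minimal sketch of what such a dump-on-exit option could look like using Go's runtime/pprof package; the function and file names here are made up, and this is not the actual #4560 implementation:

package main

import (
	"log"
	"os"
	"path/filepath"
	"runtime"
	"runtime/pprof"
)

// startProfiling begins a CPU profile and returns a function that, when
// called on exit, stops the CPU profile and also writes a heap profile.
func startProfiling(dir string) (stop func()) {
	cpuFile, err := os.Create(filepath.Join(dir, "cpuprofile"))
	if err != nil {
		log.Fatal(err)
	}
	if err := pprof.StartCPUProfile(cpuFile); err != nil {
		log.Fatal(err)
	}
	return func() {
		// Flush the CPU profile to disk.
		pprof.StopCPUProfile()
		cpuFile.Close()

		// Also write a heap profile; run the GC first so the numbers
		// reflect live memory at exit.
		memFile, err := os.Create(filepath.Join(dir, "memprofile"))
		if err != nil {
			log.Fatal(err)
		}
		runtime.GC()
		if err := pprof.WriteHeapProfile(memFile); err != nil {
			log.Fatal(err)
		}
		memFile.Close()
	}
}

func main() {
	stop := startProfiling(os.TempDir())
	defer stop()
	// ... the actual short-lived work, e.g. handling one SSH "serv" call ...
}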

@lafriks
Member

lafriks commented Jul 28, 2018

Sure, it could help to have such an option to pprof this.

@xor-gate
Contributor Author

If no one else is going to work on this, I can try to implement dumping pprof profiles to disk in the upcoming weeks.

@lafriks
Member

lafriks commented Jul 30, 2018

Yes, go ahead

@xor-gate
Contributor Author

@lafriks an initial working implementation of the dumping is in PR #4560.

@micw
Contributor

micw commented Aug 1, 2018

I have the same issue - from time to time, hundreds of "gitea serv" processes eat all my system memory. How can I help to track it down?

@xor-gate
Contributor Author

xor-gate commented Aug 1, 2018

Hi @micw, I started working on profiling this problem in #4560. Currently we cannot trace exactly what is going on, so I added pprof dumps to disk. More can be read at https://blog.golang.org/profiling-go-programs. I am using the https://github.com/google/pprof web interface visualizer.

@xor-gate
Contributor Author

xor-gate commented Aug 1, 2018

Once #4560 is merged (and released), we can share the pprof files, analyze them offline and investigate further.

@xor-gate changed the title from "Gitea serv openssh key process burns down the server with concurrent jenkins SCM poller" to "Serv command consumes all server resources" Aug 1, 2018
@chihchi

chihchi commented Aug 10, 2018

Hi all, I use Gitea 1.3.2 and have the same issue: it uses almost all the CPU resources on an AWS EC2 m4.medium. Would you suggest I upgrade to 1.4.3?

platform 1195 0.4 2.0 972704 77268 ? Ssl Jul24 103:26 /home/platform/gitea/gitea web
platform 10094 23.6 13.5 707236 523232 ? Rsl 13:58 0:02 /home/platform/gitea/gitea serv key-64 --config=/home/platform/gitea/custom/conf/app.ini
platform 10135 23.5 20.3 1209440 782640 ? Rsl 13:58 0:01 /home/platform/gitea/gitea serv key-64 --config=/home/platform/gitea/custom/conf/app.ini
platform 10176 26.7 13.6 707236 524916 ? Ssl 13:58 0:01 /home/platform/gitea/gitea serv key-64 --config=/home/platform/gitea/custom/conf/app.ini
platform 10217 20.5 13.5 641700 522992 ? Ssl 13:58 0:00 /home/platform/gitea/gitea serv key-64 --config=/home/platform/gitea/custom/conf/app.ini

(screenshot attached)

@micw
Contributor

micw commented Aug 10, 2018

@chihchi This issue causes high memory/IO consumption and occurs on 1.4.3, so I'm not sure that yours is the same issue. If it is, upgrading to 1.4.3 won't help.

@micw
Contributor

micw commented Aug 10, 2018

The issue occurred several times this week. It looks like the situation occurs when more "serv" processes are running than the server can handle. If no more requests follow, the server returns to normal after a while. But if clients keep querying Git (like Jenkins does), more and more processes are launched, which consume all available resources (mainly memory).

Basically this is expected behaviour if clients consume more resources than the server can provide (a kind of DoS). But in this case the amount of client activity is quite small (just a few Jenkins jobs polling for changes every minute) - that should be handled without problems.

As a workaround, I moved our Git to a better server (fast SSD storage, 64 GB RAM, 12 CPU cores) - the issue has not occurred again since then.

@lafriks
Member

lafriks commented Aug 10, 2018

@chihchi you should try upgrading anyway; even though there has been no direct change to fix this, it could be that fixing another bug has also fixed this one.

@micw
Contributor

micw commented Aug 10, 2018

I need to update my experience - the issue also occurred on the bigger server today. It can handle the load better, but it already has a load of >60. Since the system is still responsive, I can do some analysis:

ps fax| grep "gitea serv" | wc -l

-> only 40 gitea serv processes are running

ps fax| grep "gitea serv" | awk '{ print $1 }' | sort -n > list1
(wait a while)
ps fax| grep "gitea serv" | awk '{ print $1 }' | sort -n > list2
comm -12 list1 list2

Only a few processes (~15) last longer and consume a lot of resources (some consume 1 GB of memory or more). After a while (~10 minutes) all of those processes finished and everything returned to normal.

@xor-gate
Contributor Author

xor-gate commented Aug 10, 2018

@lafriks is it possible that my pprof dump feature could be backported to 1.4.x?

@lafriks
Member

lafriks commented Aug 11, 2018

@xor-gate not for 1.4.x, as 1.5.0 was just released. We don't usually backport features to minor releases.

@xor-gate
Contributor Author

I understand

@chihchi

chihchi commented Aug 15, 2018

@lafriks Hi all, I upgraded to 1.4.3 and it seems to resolve the high CPU load problem. I will continue to monitor it.

(screenshot attached)

@micw
Contributor

micw commented Sep 3, 2018

Just want to point out that @chihchi's issue was a different one (completely different, much older version). The issue described in this ticket still exists.

@xor-gate
Contributor Author

xor-gate commented Sep 3, 2018

@micw if you update to 1.5.x, it is possible to enable profiling.

@micw
Contributor

micw commented Sep 3, 2018

The upgrade is scheduled for Wednesday. But I have no idea how to enable profiling and how to profile, so please let me know how I can provide information to you.

@micw
Contributor

micw commented Nov 2, 2018

I upgraded to 1.5.3; the problem still exists and is getting worse. I have a bunch of Git repos and a few (~30) Jenkins jobs polling them. This is enough to overload a fat server (10 CPU cores, 60 GB of RAM, all SSD).

Here's what top looks like after a few minutes; almost all of the 60 GB of memory is consumed:

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                   
   89 root      20   0       0      0      0 S  72.8  0.0   1:48.55 kswapd0                                                                                                                   
 6458 root      20   0   83616  29252      0 D  54.6  0.1   3:31.26 traefik                                                                                                                   
12528 1000      20   0 2805424 1.784g      0 D  53.4  3.6   0:24.38 gitea                                                                                                                     
 8651 1000      20   0 3724112 2.653g      0 S  53.1  5.4   1:57.13 gitea                                                                                                                     
10997 1000      20   0 3725004 1.982g      0 S  51.0  4.0   0:22.77 gitea                                                                                                                     
12036 1000      20   0 3725196 2.620g      0 R  38.5  5.3   0:24.64 gitea                                                                                                                     
12608 1000      20   0 3723700 2.035g      0 R  36.3  4.1   0:27.05 gitea                                                                                                                     
10209 1000      20   0 2805488 1.736g      0 R  33.9  3.5   0:35.03 gitea                                                                                                                     
 5442 root      20   0   35008  21812      0 S  32.5  0.0   1:25.07 rancher-metadat                                                                                                           
11904 1000      20   0 3723884 2.243g      0 S  32.0  4.6   0:11.30 gitea                                                                                                                     
 9649 1000      20   0 3723792 2.660g      0 S  27.6  5.4   1:27.40 gitea                                                                                                                     
11315 1000      20   0 2804820 1.122g      0 R  27.6  2.3   0:25.93 gitea                                                                                                                     
12274 1000      20   0 2805260 1.628g      0 R  27.2  3.3   0:26.52 gitea                                                                                                                     
11895 1000      20   0 2805352 1.733g      0 R  26.2  3.5   0:12.60 gitea                                                                                                                     
 6224 systemd+  20   0  137772  23580      0 S  24.0  0.0   0:35.00 consul                                                                                                                    
 5912 root      20   0  916660  16956      0 S  22.8  0.0   1:00.51 plugin-manager                                                                                                            
  694 root      20   0 2432388  55960      0 S  22.4  0.1   3:28.63 dockerd                                                                                                                   
12016 1000      20   0 3723856 2.660g      0 S  22.1  5.4   0:11.34 gitea                                                                                                                     
 3481 root      20   0  771312  16132      0 S  21.2  0.0   0:49.00 agent                                                                                                                     
13409 1000      20   0  968008 472448      0 R  20.0  0.9   0:02.36 gitea                                                                                                                     
13574 1000      20   0  967916 173440      0 D  18.3  0.3   0:00.76 gitea                                                                                                                     
13056 1000      20   0  968008 458040      0 R  18.0  0.9   0:02.80 gitea                                                                                                                     
 9711 1000      20   0 3723508 1.797g      0 S  16.8  3.7   1:14.19 gitea                                                                                                                     
13140 1000      20   0  969412 462064      0 R  16.8  0.9   0:01.87 gitea                                                                                                                     
13134 1000      20   0  969596 382320      0 D  16.1  0.7   0:02.61 gitea                                                                                                                     
13404 1000      20   0  967980 118044      0 D  15.1  0.2   0:02.01 gitea                                                                                                                     
12467 1000      20   0  968008 923588      0 R  14.9  1.8   0:04.36 gitea                                                                                                                     
12442 1000      20   0  968356 923724      0 R  14.7  1.8   0:04.68 gitea                                                                                                                     
 8664 1000      20   0 2228884 2.083g      0 S  13.9  4.2   1:51.24 gitea                                                                                                                     
13509 1000      20   0   49484   5120      0 R  13.0  0.0   0:01.05 gitea                                                                                                                     
12371 1000      20   0  968356 615704      0 S  12.0  1.2   0:04.23 gitea                                                                                                                     
13138 1000      20   0  968100 447456      0 R  12.0  0.9   0:02.70 gitea                                                                                                                     
 9907 1000      20   0 3723856 2.262g      0 S  11.8  4.6   1:58.84 gitea                                                                                                                     
12661 1000      20   0  967752 916976      0 R  11.3  1.8   0:03.79 gitea                                                                                                                     
13359 1000      20   0  968100 415048      0 R  11.3  0.8   0:02.02 gitea                                                                                                                     
13581 1000      20   0   49576   5160      0 R  11.3  0.0   0:00.47 gitea                                                                                                                     
12891 1000      20   0  968192 923508      0 R  11.1  1.8   0:03.08 gitea                                                                                                                     
 6556 root      20   0  890824  16828      0 S  10.8  0.0   0:07.86 dockerd                                                                                                                   
13496 1000      20   0  967908  99616      0 R  10.1  0.2   0:01.62 gitea                                                                                                                     
12941 1000      20   0 1886768 924720    856 R   9.9  1.8   0:02.93 gitea                                                                                                                     
13124 1000      20   0  968284 476140      0 R   9.9  0.9   0:02.41 gitea                                                                                                                     
12636 1000      20   0  968540 923748      0 S   9.1  1.8   0:03.26 gitea                                                                                                                     
13057 1000      20   0  969596 383820      0 R   8.9  0.7   0:02.65 gitea                                                                                                                     
12529 1000      20   0  968448 923612      0 R   8.7  1.8   0:22.79 gitea                                                                                                                     
13524 1000      20   0  966516  33820      0 D   8.7  0.1   0:01.04 gitea                                                                                                                     

This happens several times a day.

@micw
Contributor

micw commented Nov 6, 2018

@lunny @xor-gate Would it be possible to create a 1.5.x Docker image that contains this profiling patch (or can I easily patch an existing image)? I'd really like to track this down; maybe we can fix this issue for the 1.6 release.

@adelowo
Member

adelowo commented Nov 6, 2018

@micw You can try out an RC for 1.6.0, which contains this change. I am currently running an RC in production anyway.

https://github.com/go-gitea/gitea/releases/tag/v1.6.0-rc2

@xor-gate
Contributor Author

xor-gate commented Nov 6, 2018

We are not backporting the profiling patch to 1.5.x as 1.6.0 will be released soon.

@micw
Contributor

micw commented Nov 6, 2018

Hi,

I ran the profiling on 1.6.0-rc2 but I cannot interpret the results.
What I have done:

  • waited until there was low load on the server
  • enabled profiling in config
  • waited until there was a peak of many gitea processes
  • immediately disabled profiling in config
  • waited until the last process of the peak terminated (~2 minutes)
  • looked into the last files written (these are from the processes which ran for ~2 minutes)

Here's the result:
pprof.zip

CPU profiling says that the process ran for only 16 ms, which is definitely wrong.

Any ideas?

@xor-gate
Contributor Author

xor-gate commented Nov 6, 2018

I think you are analyzing the wrong profiles. I have loaded the profiles into pprof (with https://github.com/go-gitea/gitea/releases/download/v1.6.0-rc2/gitea-1.6.0-rc2-linux-amd64 downloaded into $PWD).

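# PPROF_BINARY_PATH tells pprof where to look for the gitea binary so it can resolve symbols in the profiles.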
PPROF_BINARY_PATH=$PWD pprof -http 0.0.0.0:1337 cpuprofile_eh_183397202
PPROF_BINARY_PATH=$PWD pprof -http 0.0.0.0:1337 memprofile_eh_520066697

https://github.com/google/pprof

See screenshots:

(screenshots of the pprof web UI attached)

@micw
Contributor

micw commented Nov 6, 2018

All of the profiles that were written look like that. I did a quick pprof -text on each file and grepped for the duration. The one I uploaded had by far the longest duration.

If you look into the memory profile under alloc_space, you see that almost 3 GB are allocated. There are other profiles with >10 GB allocated.

Could this be the cause of the enormous load? And could it be that the memory allocation itself (which is done by the OS, not by the Go code) is not part of the profile?

@micw
Contributor

micw commented Nov 6, 2018

I'm not that familiar with Go, but to me https://github.com/go-gitea/gitea/blob/master/modules/log/file.go#L147 reads as if the whole log file is read into memory during gitea serv.
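For illustration only, a simplified sketch of the kind of pattern being described (reading the entire file just to count its lines); this is not the actual Gitea code:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// countLinesByReadingAll loads the entire file into memory just to count its
// newlines. With a large shared serv.log, every short-lived process that
// opens the logger pays this allocation.
func countLinesByReadingAll(path string) (int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return bytes.Count(data, []byte{'\n'}), nil
}

func main() {
	n, err := countLinesByReadingAll("serv.log")
	fmt.Println(n, err)
}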

@micw
Contributor

micw commented Nov 6, 2018

I wiped the serv.log and the problem is gone...

@micw
Contributor

micw commented Nov 6, 2018

I reviewed the code. Indeed, the whole log file is read; the purpose is log rotation. This should not be done in the logger, and especially not if multiple processes log to the same file.

@lafriks
Member

lafriks commented Nov 6, 2018

Wow, that is crazy. Good find

@xor-gate
Contributor Author

xor-gate commented Nov 6, 2018

It was a long story, but finally 🎉

@micw
Contributor

micw commented Nov 6, 2018

I'd say the story ends when there is a patch ;-)

I suggest the following changes:

  1. Don't read the whole file into memory just to count the lines - stream it instead (see the sketch after this list). This solves the memory issue, but the high I/O remains. Still, it's a small change that can be applied ad hoc.

  2. Re-think log rotation.
    Consider moving log rotation out of Gitea's scope and using "logrotate" or a similar tool.

  3. (If you decide to keep log rotation in Gitea) Fix log rotation:

  • remove the "maximum lines per log file" feature. This is an anti-pattern because it requires parsing the whole file. Use the file size and/or age (ctime) instead.
  • remove the implicit log rotation. It does not play well in multi-process environments.
  • run a service task in the long-running gitea server process that rotates all log files. Don't do log rotation in "serv" processes.
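A rough sketch of what point 1 could look like; countLinesStreaming is a made-up helper name, not an actual Gitea function. It counts newlines in fixed-size chunks, so memory use stays constant regardless of how large the log grows:

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// countLinesStreaming counts newlines in 64 KB chunks instead of loading the
// whole file, so memory use does not grow with the size of the log.
func countLinesStreaming(path string) (int, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	buf := make([]byte, 64*1024)
	lines := 0
	for {
		n, err := f.Read(buf)
		lines += bytes.Count(buf[:n], []byte{'\n'})
		if err == io.EOF {
			return lines, nil
		}
		if err != nil {
			return 0, err
		}
	}
}

func main() {
	n, err := countLinesStreaming("serv.log")
	fmt.Println(n, err)
}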

@adelowo
Member

adelowo commented Nov 6, 2018

Oh boy, what a find. I agree, file size should be used.

@adelowo
Member

adelowo commented Nov 6, 2018

Or move log rotation entirely out of Gitea's responsibility.

@lafriks
Member

lafriks commented Nov 6, 2018

I have submitted PR #5282 to remove the log file max-line-count functionality, as a max file size should be more than enough for file rotation.
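For illustration, a minimal sketch of the idea behind size-based rotation (not the actual #5282 change); a single os.Stat call replaces reading the whole file:

package main

import (
	"fmt"
	"os"
)

// needsRotation checks the log file size with a single stat call, so the cost
// no longer depends on how large the log has grown.
func needsRotation(path string, maxSize int64) (bool, error) {
	info, err := os.Stat(path)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	return info.Size() >= maxSize, nil
}

func main() {
	rotate, err := needsRotation("serv.log", 256<<20) // e.g. 256 MB threshold
	fmt.Println(rotate, err)
}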

@lafriks added this to the 1.6.0 milestone Nov 6, 2018
@lafriks added the issue/critical label Nov 6, 2018
@xor-gate
Contributor Author

xor-gate commented Nov 6, 2018

Yeah, I think this option is also not "normal": https://manpages.debian.org/jessie/logrotate/logrotate.8.en.html

@micw
Contributor

micw commented Nov 6, 2018

@xor-gate absolutely, I have never seen line-based log rotation before. Removing the "feature" should cause no issues. I can barely imagine any use case where line-based rotation is required ;)

@micw
Contributor

micw commented Nov 6, 2018

I think we can close this issue when #5282 is merged. I'll create a new one to rethink/rework log rotation.

@xor-gate
Contributor Author

xor-gate commented Nov 6, 2018

As @lafriks mentions, the size limit should be enough to keep things within reasonable proportions without blowing up the storage. Thanks all for the teamwork!

@micw
Contributor

micw commented Nov 6, 2018

I created #5285 to track the remaining log rotation issues that do not affect performance/memory usage.

@go-gitea locked and limited conversation to collaborators Nov 24, 2020