Memory Profiler causes extreme memory usage during profiling. #78
Comments
It is very, very tricky; collecting all this data comes at a cost, and we have no way of simply streaming this data to disk and then running the analysis on data stored on disk. Is there any way you can simply collect less info?
Kind of, we did. It was a bit of a mess, but memory profiling acting crazy prompted me to look closer at other stuff with a manual audit, which helped me notice an allocation leak in one of the libraries we use, where the library was allocating millions of objects over and over again.
I'm having the same issue, to the point where I can't really measure the job because I'm running out of memory. How can I reduce the amount of info collected? I'm already trying to evaluate a single class (using `trace: [TargetClass]`).
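For reference, a minimal sketch of that option, where `TargetClass` and `do_work` stand in for the real job code:

```ruby
require 'memory_profiler'

# Restrict tracking to allocations of the classes listed in `trace:`.
# TargetClass and do_work are placeholders for the actual workload.
report = MemoryProfiler.report(trace: [TargetClass]) do
  TargetClass.new.do_work
end
report.pretty_print
```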
This is happening to me when I'm trying to measure the amount of memory consumed while reading from a stream.
If I execute that code without the memory profiler, I can see with htop that the memory usage barely changes. But if I put the block inside the …
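As a profiler-free cross-check, the process RSS can also be sampled directly from Ruby; a rough sketch, where `stream` and `process` are placeholders for the stream-reading code:

```ruby
# Sample the process's resident set size with no profiler attached;
# `ps` reports RSS in kilobytes on Linux and macOS.
def rss_mb
  `ps -o rss= -p #{Process.pid}`.to_i / 1024.0
end

before = rss_mb
stream.each_line { |line| process(line) } # placeholder workload
puts "RSS grew by #{(rss_mb - before).round(1)} MB"
```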
I have a similar issue: I'm trying to profile a Sidekiq job through a middleware with memory profiler (Sidekiq is running with only 1 thread). When I execute the code without the profiler, memory use is approximately 2GB; with the profiler it reaches up to 28GB until it is killed by the OOM killer. The background job uses CSV to read a CSV file line by line and generate another CSV file.
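For context, a middleware along these lines (class and registration names are illustrative, not the commenter's actual code):

```ruby
require 'memory_profiler'

# A Sidekiq server middleware that wraps each job in a profiler report.
# Every allocation made by the job (e.g. one or more objects per CSV
# row) gets recorded by the profiler, which is what makes RSS balloon.
class MemoryProfilerMiddleware
  def call(_worker, _job, _queue)
    report = MemoryProfiler.report { yield }
    report.pretty_print
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add MemoryProfilerMiddleware
  end
end
```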
https://github.com/ruby/ruby/blob/0a630fa461a7260235842e482f682deca30172d6/ext/objspace/object_tracing.c#L445C40-L445C40 looks like it allocates a new string each time we get the allocation source file, which could definitely be an fstring (my guess is it's coming from the iseq, which should already have an associated fstring for the file name?). A profile with rbspy is telling me that the CPU time is being spent in …
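A quick way to test the string-duplication guess from Ruby itself, assuming the standard objspace extension:

```ruby
require 'objspace'

ObjectSpace.trace_object_allocations do
  obj = Object.new
  a = ObjectSpace.allocation_sourcefile(obj)
  b = ObjectSpace.allocation_sourcefile(obj)
  # false here would mean each lookup returned a freshly allocated
  # String rather than a shared (deduplicated, frozen) fstring.
  puts a.equal?(b)
end
```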
I think another thing that might help would be an option to skip …
Using `MemoryProfiler.start` on one of our jobs, and then waiting to dump `MemoryProfiler.stop&.pretty_print`, took a job that uses 2GB of memory to 76GB of memory. Is there a way we can prevent this kind of memory usage? Even running the memory profiler for a few short seconds causes memory use to jump to extreme levels.
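For anyone reproducing this, the usage in question looks roughly like the following, where `run_the_job` is a placeholder for the actual ~2GB workload:

```ruby
require 'memory_profiler'

MemoryProfiler.start
run_the_job              # placeholder for the job being profiled
report = MemoryProfiler.stop
report&.pretty_print
```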