Make the default GC_INITIAL_HEAP_SIZE a lot bigger #11130
base: master
Conversation
80% and 8 GB are too aggressive.
The memory may be available at process start, but we are likely to create a bunch of builds that may need that memory, and we're still not releasing it properly.
Virtual memory only helps a little (real pages aren't assigned up front); the problem occurs when a large evaluation has happened and GC is not run because the limit wasn't hit. (GHC allocates 1 TB to minimize certain overhead, but it does garbage collect as the heap grows; IIUC that's not how bdwgc behaves for this setting.)
Also consider that multiple commands may run simultaneously. For instance, I sometimes refresh my dev shell in the terminal while the direnv integration in my editor is also reloading it (it's a mechanism for finding LSPs).
If we could resolve that issue, we could be a little more aggressive, but maybe something more like 30% and 4 GB?
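The more conservative numbers suggested above amount to a simple clamp. A minimal sketch, assuming an input in kB as reported by /proc/meminfo; `compute_initial_heap` is a hypothetical helper, not code from this PR:

```shell
# Hypothetical sketch of the suggested defaults: take 30% of
# available memory, but never more than 4 GiB.
compute_initial_heap() {
    avail_kb=$1                          # MemAvailable in kB, as in /proc/meminfo
    cap=$((4 * 1024 * 1024 * 1024))      # 4 GiB ceiling, in bytes
    heap=$((avail_kb * 1024 * 30 / 100)) # 30% of available memory, in bytes
    if [ "$heap" -gt "$cap" ]; then heap=$cap; fi
    echo "$heap"
}
```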
Please keep the original default in libexpr and increase the initial heap in the CLI.
For some applications, large evaluations are not the main activity, and these defaults would basically constitute a massive memory leak.
auto i = fields.find("MemAvailable");
if (i == fields.end())
    i = fields.find("MemFree");
if (i != fields.end()) {
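The fallback in the diff above (prefer MemAvailable, and use MemFree on older kernels that lack it) can be mirrored in shell for illustration; the function name and file parameter are made up for the sketch:

```shell
# Sketch of the MemAvailable -> MemFree fallback from the diff above.
# $1 is a file in /proc/meminfo format (normally /proc/meminfo itself).
free_kb() {
    awk '/^MemAvailable:/ { print $2; ok = 1; exit } END { exit !ok }' "$1" \
        || awk '/^MemFree:/ { print $2 }' "$1"
}
```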
I think this number might be misleading on ZFS (which is quite popular with NixOS users), as ZFS ARC allocations are not accounted for in the page cache. There is a trick, though, that htop uses to get a more accurate number.
# Get total memory in kB
total_mem=$(grep MemTotal /proc/meminfo | awk '{print $2}')
# Get available memory in kB (already excludes reclaimable page cache, but not the ARC)
avail_mem=$(grep MemAvailable /proc/meminfo | awk '{print $2}')
# Get ARC memory in bytes from arcstats
arc_mem=$(awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats)
# Convert total and available memory to bytes for consistency
total_mem=$((total_mem * 1024))
avail_mem=$((avail_mem * 1024))
# Calculate used memory, treating the ARC as reclaimable rather than used (in bytes)
used_mem_excluding_arc=$((total_mem - avail_mem - arc_mem))
# Convert result to a more readable format (kB)
used_mem_excluding_arc_kb=$((used_mem_excluding_arc / 1024))
# Print the result
echo "Used memory excluding ARC: $used_mem_excluding_arc_kb kB"
Here is an example arcstats: https://gist.github.com/Mic92/b2d37f5f8ed7f93dff432030ec8f1c67
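The htop trick mentioned above amounts to counting the (shrinkable) ARC as reclaimable memory. A rough sketch, with the file paths parameterized so it degrades gracefully on non-ZFS systems; the function name is hypothetical:

```shell
# Add the ZFS ARC size (bytes, from the arcstats "size" kstat) to
# MemAvailable (kB) to estimate memory that is actually reclaimable.
# $1: meminfo-format file; $2: arcstats-format file (may be absent).
effective_avail_kb() {
    avail_kb=$(awk '/^MemAvailable:/ { print $2 }' "$1")
    if [ -r "$2" ]; then
        arc_b=$(awk '$1 == "size" { print $3 }' "$2")
        avail_kb=$((avail_kb + arc_b / 1024))
    fi
    echo "$avail_kb"
}
```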
Motivation
On Linux, we now use 80% of free memory. If it's free, we may as well use it, and hopefully avoid some expensive stop-the-world GC cycles. On macOS, we just assume that 25% of memory is free, so we use 80% of 25% = 20% of memory by default.
This gives a fairly significant speedup, e.g. nix search nixpkgs ^ went from 14.9s to 10.9s (with a 1.1 GB increase in peak memory use) on my machine, and nix eval of my NixOS config went from 6.7s to 5.8s (421 MB increase).
Cherry-picked from #10938.
Context
Priorities and Process
Add 👍 to pull requests you find important.
The Nix maintainer team uses a GitHub project board to schedule and track reviews.