
Memory leak in synchrotron worker #17791

Open
kosmos opened this issue Oct 4, 2024 · 8 comments

kosmos commented Oct 4, 2024

Description

I have the following nginx routing configuration for my synchrotron workers:

# Sync initial/normal
location ~ ^/_matrix/client/(r0|v3)/sync$ {
  proxy_pass https://$sync;
}


# Synchrotron
location ~ ^/_matrix/client/(api/v1|r0|v3)/events$ {
  proxy_pass https://synapse_sync;
}


# Initial_sync
location ~ ^/_matrix/client/(api/v1|r0|v3)/initialSync$ {
  proxy_pass https://synapse_initial_sync;
}

location ~ ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$ {
  proxy_pass https://synapse_initial_sync;
}

And the following cache settings for this worker:

event_cache_size: 150K
caches:
  global_factor: 1
  expire_caches: true
  cache_entry_ttl: 30m
  sync_response_cache_duration: 2m
  cache_autotuning:
    max_cache_memory_usage: 2048M
    target_cache_memory_usage: 1792M
    min_cache_ttl: 1m
  per_cache_factors:
    stateGroupCache: 1
    stateGroupMembersCache: 5
    get_users_in_room: 20
    get_users_who_share_room_with_user: 10
    _get_linearized_receipts_for_room: 20
    get_presence_list_observers_accepted: 5
    get_user_by_access_token: 2
    get_user_filter: 2
    is_support_user: 2
    state_cache: 5
    get_current_state_ids: 5
    get_forgotten_rooms_for_user: 5

All other worker types run without problems, but the synchrotron workers have a memory leak that eventually exhausts all available memory.

It seems that the cache_autotuning settings are not working. The environment variable PYTHONMALLOC=malloc is set at the operating system level.

From what I can tell, the problem first appeared after updating Synapse to 1.114 and is still present in 1.116.

Steps to reproduce

To reproduce the problem, you need a heavily loaded homeserver with dedicated synchrotron workers.

Homeserver

Synapse 1.116.0

Synapse Version

Synapse 1.116.0

Installation Method

Docker (matrixdotorg/synapse)

Database

PostgreSQL

Workers

Multiple workers

Platform

Configuration

No response

Relevant log output

-

Anything else that would be useful to know?

No response

@anoadragon453 (Member)

Hi @kosmos, thanks for filing an issue. Can I just double-check that you're using jemalloc in your configuration? Doing so is required for the cache_autotuning option.
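
(For anyone hitting the same question: a quick way to check whether jemalloc is actually loaded into a running worker, assuming a Linux host and shell access to the container, is to look for libjemalloc in the process's memory maps. This is only an illustrative sketch; SYNAPSE_PID is a placeholder for the worker's actual PID.)

# Illustrative check (Linux): is libjemalloc mapped into the worker process?
# SYNAPSE_PID is a placeholder -- substitute the synchrotron worker's PID,
# e.g. as reported by `ps` inside the container.
SYNAPSE_PID = 1  # placeholder/assumption

def jemalloc_loaded(pid: int) -> bool:
    with open(f"/proc/{pid}/maps") as f:
        return any("libjemalloc" in line for line in f)

print("jemalloc loaded:", jemalloc_loaded(SYNAPSE_PID))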

kosmos (Author) commented Oct 7, 2024

@anoadragon453 I'm using the official Docker image, and this feature (jemalloc) is enabled by default there.

@anoadragon453 (Member)

@kosmos Did you upgrade from 1.113.0 before being on 1.114.0? I don't see any changes that are particularly related to caches in 1.114.0.

Around the time of the memory ballooning, are you seeing lots of initial syncs at once? Those requests are known to be memory intensive, especially for users in a large number of rooms.

Do you have monitoring set up for your Synapse instance? If so, could you have a look at the Caches -> Cache Size graph around the time of the memory ballooning to see what cache might be overinflating? I'm also happy to poke through your Grafana metrics if you can make them privately or publicly available. Feel free to DM me at @andrewm:element.io if you'd like to share them via that route.

kosmos (Author) commented Oct 9, 2024

@anoadragon453 Our update history was 1.113 -> 1.114 -> 1.116. I'll write to you in private messages.

@anoadragon453 (Member)

@kosmos and I exchanged some DMs to try and get to the bottom of this. The conclusion was that @kosmos does have jemalloc enabled in their deployment, yet the memory-based cache eviction doesn't seem to be kicking in. There was no sign of this log line in their logs:

logger.info("Begin memory-based cache eviction.")

This is odd, as the memory of the process (2500M) is going over their configured maximum (2048M).

However, only part of the application's total memory allocations is being carried out by jemalloc. Over half(!) is still being performed by the built-in Python allocator. Looking at the stats @kosmos provided me, jemalloc showed only ~700M of RAM allocated at the time of the OOM, with the remaining ~1.75G allocated elsewhere.

This resulted in the cache autotuning not kicking in. The real question is what is actually taking up all that memory, and why isn't it being allocated through jemalloc?
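
(Conceptually, and very roughly, the eviction decision can be thought of like the sketch below. This is illustrative only, not Synapse's actual code, and the names are made up.)

# Illustrative sketch, not Synapse's actual code: memory-based cache eviction
# keys off jemalloc's view of allocated memory, not the process's total RSS,
# so memory allocated outside jemalloc never pushes it over the threshold.
MAX_CACHE_MEMORY = 2048 * 1024 * 1024  # max_cache_memory_usage: 2048M

def should_evict(jemalloc_allocated_bytes: int) -> bool:
    return jemalloc_allocated_bytes > MAX_CACHE_MEMORY

# With the numbers above: ~700M allocated via jemalloc at the time of the OOM,
# so no eviction triggers even though the process as a whole is at ~2500M.
print(should_evict(700 * 1024 * 1024))  # False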

Native code (Synapse contains Rust code) won't use jemalloc to allocate memory, so that is one possible explanation. Native code in dependencies that use C/C++ extensions is another contender. Short of getting out a Python memory profiler, however, it's hard to say.
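
(If someone does want to dig in, one limited but cheap option is tracemalloc. Caveat: it only sees allocations made through Python's allocator, so a native-code leak won't show up in it directly, but a flat tracemalloc total alongside a growing RSS would itself point at native code. A minimal sketch:)

# Limited sketch: tracemalloc only tracks allocations made via Python's
# allocator; native (C/C++/Rust) allocations will not appear here.
import tracemalloc

tracemalloc.start(25)  # keep up to 25 frames of traceback per allocation

# ... let the worker / workload run for a while, then:
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)  # top 10 allocation sites by cumulative size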

As a workaround, I'd recommend reducing your max_cache_memory_usage and target_cache_memory_usage values by half to allow the autotuning to kick in.

kosmos (Author) commented Oct 9, 2024

This approach won't work, because the jemalloc memory graph is nowhere near its maximum at the moment the physical memory starts growing. In other words, there is no criterion by which we can tell when the memory should be cleared.

In addition, if it's not the jemalloc-allocated memory that is growing, then evicting jemalloc-allocated caches won't save me.

@anoadragon453 (Member)

Good point, yes. I think the next step is tracking down what exactly is taking up the memory in your homeserver when it OOMs, and from there figuring out why it isn't being allocated by jemalloc.

Alternatively, having the ability to discard caches based on total application memory, rather than only jemalloc-allocated memory, could work.
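
(For illustration only, a rough sketch of what such a signal could look like: read the process's current RSS from /proc and compare it against the configured ceiling. This is hypothetical, not existing Synapse behaviour.)

# Hypothetical sketch (not existing Synapse code): use total process memory
# (RSS) as the eviction signal instead of jemalloc's allocation statistics.
import os

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")
MAX_CACHE_MEMORY = 2048 * 1024 * 1024  # mirrors max_cache_memory_usage: 2048M

def rss_bytes() -> int:
    # /proc/self/statm (Linux): the second field is the resident set size,
    # measured in pages.
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1]) * PAGE_SIZE

def should_evict_by_rss() -> bool:
    return rss_bytes() > MAX_CACHE_MEMORY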

kosmos (Author) commented Oct 10, 2024

I still feel that there is a memory leak somewhere in the native libraries, and it is important to find it.
