
Satisfy ResourceManager debug needs with smaller code footprint #9621

Closed
Tracked by #9650
BigLep opened this issue Jan 31, 2023 · 2 comments · Fixed by #9680
Labels
topic/resource-manager Issues related to Swarm.ResourceMgr (resource manager)

Comments

@BigLep
Contributor

BigLep commented Jan 31, 2023

Done Criteria

It is possible for a user to follow docs at https://github.com/ipfs/kubo/blob/master/docs/libp2p-resource-management.md to understand:

  1. What the limits are for a running Kubo node
  2. What the current utilization is against those limits
  3. What limits are closest to being fully utilized

Why Important

We have a lot of code right now in our stat/limit commands for printing and analyzing limits. The problems with it are:

  1. It makes keeping up to date with libp2p refactors (like the work in #9612, "WIP: Implement new ResourceManager partial limits config object") more difficult
  2. We aren't flexible in highlighting the information we want (e.g., most important scopes first, or only the limits that are most highly utilized)

Notes

The current proposal is to have two commands from Kubo:

  1. Get the limits of a running Kubo node.
  2. Get the current accounting/utilization of a running Kubo node.

In both cases we should ideally get the root object from the go-libp2p resource manager/accountant and print it out as JSON.
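A minimal sketch of how that might look from the shell, assuming both commands print plain JSON (the command names follow the existing `ipfs swarm limit` / `ipfs swarm stats`):

```
# Capture both outputs for post-processing (e.g., with jq, as sketched below).
ipfs swarm limit > limits.json   # configured limits, per scope
ipfs swarm stats > stats.json    # current accounting, per scope
```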

With these sets of information, it's possible to run jq/awk/etc. commands to give output like:

| Scope | Limit Name | Limit Value | Current Resource Accounting | Utilization Percentage |
| --- | --- | --- | --- | --- |
| System | ConnsInbound | 100 | 50 | 50% |
| System | ConnsOutbound | 200 | 10 | 5% |
| Transient | ConnsInbound | 10 | 4 | 40% |
| Transient | ConnsOutbound | 50 | 10 | 20% |

This is really the key summary users or maintainers need when debugging.
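For example, here is a hedged jq sketch that joins the two files captured above into such a table. It assumes the JSON shapes shown in the comments below, where the stats keys carry a `Num` prefix that the limit keys lack (`Memory` excepted):

```
jq -rn --slurpfile l limits.json --slurpfile s stats.json '
  (
    ["Scope", "Limit Name", "Limit Value", "Current", "Utilization"],
    ( (["System", "Transient"][]) as $scope
      | ($l[0][$scope] // {}) | to_entries[]
      | .key as $k | .value as $v
      # assumption: stats keys are the limit keys with a "Num" prefix, except Memory
      | (if $k == "Memory" then $k else "Num" + $k end) as $sk
      | (($s[0][$scope] // {})[$sk] // 0) as $used
      | [ $scope, $k, $v, $used,
          (if $v > 0 then ((100 * $used / $v) | floor | tostring) + "%" else "n/a" end) ]
    )
  ) | @tsv
'
```

Piping the result through `column -t` lines the columns up for reading.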

This assumes we're also doing the work of removing Swarm.ResourceMgr.Limits in favor of limits.json in go-libp2p: #9603. With that done, our commands no longer need to print values in a form that can be copy/pasted into the Kubo config.

Doing this work should also allow us to drop other items on the backlog.

In addition to code changes, this work will involve:

  1. Updates to https://github.com/ipfs/kubo/blob/master/docs/libp2p-resource-management.md
  2. Updates to https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmresourcemgr
  3. Ensuring we no longer have the problems listed in #9577 ("ipfs swarm limit all: returns empty "" Service and Protocol name")
@ajnavarro
Member

ajnavarro commented Feb 1, 2023

There are some inconsistencies between the libp2p stats output and the limits output (different fields):
Stats:

```
ipfs swarm stats | jq .System
{
  "Memory": 13893632,
  "NumConnsInbound": 0,
  "NumConnsOutbound": 1295,
  "NumFD": 63,
  "NumStreamsInbound": 15,
  "NumStreamsOutbound": 186
}
```

Limits:

```
ipfs swarm limit | jq .System
{
  "Conns": 1000000000,
  "ConnsInbound": 16212,
  "ConnsOutbound": 1000000000,
  "FD": 524288,
  "Memory": 17000000000,
  "Streams": 1000000000,
  "StreamsInbound": 1000000000,
  "StreamsOutbound": 1000000000
}
```

Checking if that is intended.
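If the `Num` prefix is intended, one way to make the two objects comparable (a sketch against the outputs shown above) is to strip the prefix from the stats keys; `Memory` is already unprefixed, and the aggregate limits (`Conns`, `Streams`) have no stats counterpart either way:

```
# Normalize stats keys to match limit keys by dropping the leading "Num".
ipfs swarm stats | jq '.System | with_entries(.key |= sub("^Num"; ""))'
```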

@ajnavarro
Member

Related PR #9623

I added a new command with the table view because it was difficult to create using the two JSON outputs.
