Possible to implement ratelimiting #2489
Just thinking out loud: to implement rate limiting you'd also have to manage the peer count. If you restrict bandwidth to, say, 10KB/sec, you'd probably have to limit your peers to around 10 so that each gets at least 1KB/sec.
Rate limiting wouldn't be too difficult to do, but I'm not sure what effect it would have on overall performance yet. My feeling is that things will become verrrrry slow if we start rate limiting. If you still want to go ahead and try it, take a look at the metrics stuff in libp2p: https://github.com/ipfs/go-libp2p/blob/master/p2p/metrics/conn/conn.go Every connection in ipfs gets wrapped in one of those, and it's where we record our (albeit seemingly incorrect) bandwidth stats. You could very easily implement your own bandwidth-recorder-type object that does token bucket rate limiting, and pass that into the swarm constructor here: https://github.com/ipfs/go-ipfs/blob/master/core/core.go#L140
Thanks for the tips. One defence of rate limiting: just about every BitTorrent client supports it, yet the network still performs downloads at peak speeds given enough nodes. You mentioned the bandwidth stats are seemingly wrong; how do you view those stats?
you can run
Thanks; yeah, it looks like the accounting is understating the bandwidth per second by at least 50%.
Yeah, I'm not sure what the discrepancy is... If you want to take a look at it, please do. I'd love a second pair of eyes on it all.
I've been playing around with replicating the wire protocol and found that go-multistream is sending message size prefixes and line feeds ("\n") as separate packets. You can see the code here: https://github.com/whyrusleeping/go-multistream/blob/master/multistream.go#L37 and I can confirm that Wireshark shows multiple packets coming in with 1 byte of data each. I wonder if this is part of the cause of the incorrect bandwidth usage stats? The overhead of sending a packet with 1 byte instead of, say, 1400 bytes would make quite a big difference. Traffic monitoring programs show the total bytes including packet overhead, while go-ipfs is probably reporting only the data bytes.
@slothbag I just now saw this comment. That's very interesting... thank you for pointing that out.
Wow. That's certainly possible, given how the go net library works and how […]

For a lot of this network code, aggregating every 1ms would do well in most […]

Sadly, an io.RWC isn't everything you ever hoped to do with a network pipe. I bet […]

Someday, people will implement this as an ML based io scheduler that […]

(Sent by email in reply to Jeromy Johnson, Tue, Aug 9, 2016 at 12:29.)
I actually prefer Go's method of doing things over how C does it. In C you're never really sure when you need to flush out your file descriptors, or when you need to manually buffer things up. Go always pushes writes through by default; there is no buffering on file descriptors (unlike the […])
I've run a bit of statistical analysis on an IPFS packet capture (data packets only: PSH flag set, no ACK-only packets). Histograms: […]
By default Go sets NoDelay on TCP connections; I think disabling it (i.e. enabling Nagle's algorithm) could result in major savings in the number of packets (and thus overhead) we send. @whyrusleeping, is that a good idea? NoDelay is buried deep (it's not exposed from go-reuseport, and then we would have to pass the option through multiaddr) and I have no idea how to resurface it.
I really don't think turning off NoDelay is a good solution. It will result in weird random hangs and slowdowns for us. For instance, the final packet of a write that spans multiple packets will be held back until an ACK is received from the other side. That can incur hundreds of milliseconds of latency in simple RPCs, and will be especially painful in things like DHT requests, where the responses can easily be larger than a single packet.
@whyrusleeping made awesome changes to our netstack libraries that should reduce the number of packets sent; I will repeat my tests. I will also open a separate issue for just this metric, as packet size is something we should aim to maximize in the long run to increase bandwidth efficiency.
I have an urgent need for a rate-limited ipfs node; perhaps something like https://github.com/juju/ratelimit could be implemented within IPFS.
Would it be possible to have a user-configurable option in ipfs to enable this rate-limit logic, and to carry on as it currently does when the option is off?
Can one of the devs provide an estimate of how hard this would be to implement (time/cost)? Perhaps I can organise a bounty for it to get it bumped up the priority list :)
Much appreciated.