[security] server should not trust client-provided content-length #7208
I think you should be using
As for the server detecting the wrong content size: I'm not too familiar with the implementation details, but how would you tell the difference between a client sending the wrong size and a client that is part way through sending a large request? Once the client disconnects, we'll abort. I don't think there's much more we can do, or would be expected to do.
I've addressed this.
I have even demonstrated an 8-byte attack.
In this case the socket and tmp_file will be open forever; an evil client sending such requests will deny the service, because the server can't open a new socket for other legitimate clients.
The problem here is not that the request allocated 100 MB at once; the problem is that it will never be freed.
Until the connection is dropped.
That is an issue with all servers, as far as I'm aware, and not really related to content size (what if a client sends the correct Content-Length, but then stops half way forever?). Standard DoS/DDoS prevention methods should be used, including increasing the number of open connections the OS supports, limiting the number of connections per IP, using a service like Cloudflare, etc. I don't see anything we are expected to do here.
which is enough time to reach the max-open-files limit and deny the service
No, it's properly handled in many libraries and frameworks. The client should never be trusted; malicious/misbehaving clients should be properly handled, e.g. a user uploading a large file while entering an elevator, or while their car passes under a bridge. In PHP, this is UPLOAD_ERR_PARTIAL: https://www.php.net/manual/en/features.file-upload.errors.php Python's bottle handles this well too. In aiohttp there seem to be checks for EOF, but they do not appear to work in all cases.
Floods are one kind of DoS, not the only kind. A DoS vulnerability is a shortcut that consumes server resources without even the need to flood the server; for example, Express CVE-2022-24999.
No need for a flood.
As mentioned above, if the connection is closed it should be aborted. Can you retest with 3.8.4? There was a known issue when using SSL. If it's still not aborting on 3.8.4, then figuring out why would be useful.
No, I'm not using SSL. I'm facing something strange:
nmap's ncat did not get a response at all, even with a simpler request,
while Python bottle does respond.
One responds and the other does not, only with the aiohttp server.
nmap's ncat not getting any response is a mystery to be solved; meanwhile, here is what BSD's netcat gives me.
The output was:
and the trace on the server:
Good news: it raised an exception, although it's not what I expected. Changing Content-Length to 1000:
I believe that exception pretty much means the llhttp parser failed to parse the headers. This may be because we have strict mode on (#5269 (comment)), and we'd get something different if that were changed to lenient parsing.
I'm not complaining about the exception; I want more of it. I want a similar exception in the other case.
This should not block forever. I believe a malformed request should be detected when the boundary is reached; for some reason it does not detect EOF.
By the way, I suspect that nmap's ncat client is broken for a similar reason: not detecting the close of its input.
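To make the complaint above concrete, here is a minimal sketch (not aiohttp's actual parser, and the function name is hypothetical) of the desired behavior: a multipart read should fail fast when the stream ends before the closing boundary arrives, instead of waiting forever.

```python
def multipart_state(buffer: bytes, boundary: bytes, eof: bool) -> str:
    """Classify a partially received multipart body.

    Hypothetical check: if EOF is seen before the closing boundary,
    the request is truncated/malformed and should raise, not block.
    """
    closing = b"--" + boundary + b"--"
    if closing in buffer:
        return "complete"
    if eof:
        return "error: EOF before closing boundary (truncated request)"
    return "pending"  # more data may still arrive
```

For example, `multipart_state(b"partial data", b"XYZ", eof=True)` reports an error, which is the exception the comment above is asking for.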
Yep, but that means it'll need retesting when that parser is changed, and should hopefully produce a more understandable exception.
Yep, that doesn't seem ideal. But the connection is still open, right? It does abort when the connection is closed? In terms of security, as long as it aborts when the connection is closed, it's not going to get much safer. i.e. if the connection is actually from an attacker, they just won't send EOF or anything that could be used to indicate they won't send any more data; they'll just stop mid-request and hold the connection open. So, if I'm understanding that correctly, this is more of an optimisation for badly implemented clients, rather than a security concern with malicious clients. Still worth looking at if anybody has the time though.
No, I still see it as a security risk. What needs to be changed? Handle the two cases: reaching EOF before Content-Length, and reaching Content-Length before EOF. Maybe this is reported by
and maybe it's ignored somewhere, and something needs to be done to make sure nmap's ncat works as it does with all other web servers. https://man7.org/linux/man-pages/man2/select.2.html
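On the select(2) question: a small self-contained demonstration (using Python's stdlib `socket` and `select`, not aiohttp internals) shows that a peer closing the connection is reported as *readability* — `recv()` then returns `b""` — rather than through select()'s "exceptional conditions" set, which matches the reading that exceptional events are unrelated to EOF.

```python
import select
import socket


def classify_close_event(timeout: float = 1.0) -> dict:
    """Close one end of a connected socket pair and report how
    select() sees the closure on the other end."""
    a, b = socket.socketpair()
    b.close()  # peer closes: this is the EOF case
    readable, _, exceptional = select.select([a], [], [a], timeout)
    result = {
        "readable": a in readable,        # EOF shows up as readability...
        "exceptional": a in exceptional,  # ...not as an exceptional condition
        "recv": a.recv(1024),             # recv() at EOF returns b""
    }
    a.close()
    return result
```

So any EOF handling has to happen at the layer that reads the socket and interprets the empty read, not via the exceptional-conditions set.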
As I keep saying, an attacker can just stop sending data without sending EOF. So, if you can DoS a service now, you'll still be able to DoS it after making these changes... Unless I am misunderstanding something, resolving this issue doesn't change the feasibility of a deliberate DoS attack.
Interesting. I'm not familiar with the low-level details of sockets in asyncio, so I'm not sure if that's something that is being handled or not. Although the description of exceptional events doesn't sound like anything to do with EOF (it sounds like it's only dealing with TCP errors, while EOF would be handled at a higher level).
We had a security scan flag aiohttp for this issue, so it would be good to have this resolved.
In doing this, you're trusting the content instead of the Content-Length, which gets us back where we started. Here's a document detailing what uvicorn does with Content-Length; I think it's a good reference. Basically, we can take the Content-Length and use it to validate the request body. Looking at these scenarios:
Is this in line with how aiohttp behaves right now, or what would need to be added?
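The validation scenarios above can be sketched as a small pure function (hypothetical, not aiohttp's actual code): the declared Content-Length is used to cross-check the bytes actually received, erroring out both when the body overruns the declared length and when the connection ends short of it.

```python
def check_body_length(declared: int, received: int, eof: bool) -> str:
    """Cross-check received body bytes against the declared Content-Length.

    Hypothetical classifier sketching the uvicorn-style scenarios:
    - body longer than declared  -> error
    - EOF before declared length -> error (truncated request)
    - exact match                -> complete
    - otherwise                  -> still pending
    """
    if received > declared:
        return "error: body exceeds declared Content-Length"
    if eof and received < declared:
        return "error: connection closed before full body arrived"
    if received == declared:
        return "complete"
    return "pending"
```

Note that "pending" is indistinguishable from a stalled client without a timeout, which is why the timeout discussion below still applies.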
Sounds like it should be correct. I'm not entirely sure what we do with the Content-Length, so if someone would like to create some tests (or check if any already exist), that'd be useful.
If you're curious, here's a Cloudflare article describing an attack like this. https://www.cloudflare.com/learning/ddos/ddos-attack-tools/r-u-dead-yet-rudy/
Yes, but again, there's not much aiohttp can do here. The recommendation is to set reasonable timeouts, which should probably be done in your proxy server (along with a WAF and other such protections). If aiohttp doesn't throw an exception in the above two mentioned cases, we can make that change, but it really has no impact on a DoS attack. Beyond that, I still see nothing to do here.
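As a configuration sketch of the mitigations mentioned in this thread (check your aiohttp version's documentation for exact parameter names and defaults — this assumes aiohttp 3.8+): `client_max_size` on `web.Application` caps how large a request body a handler will accept, and `keepalive_timeout` on `web.run_app` bounds how long an idle connection is kept around.

```python
from aiohttp import web

# Cap request bodies at 1 MiB; larger uploads are rejected instead of
# being buffered indefinitely.
app = web.Application(client_max_size=1024 ** 2)


async def upload(request: web.Request) -> web.Response:
    # request.read() raises if the body exceeds client_max_size
    data = await request.read()
    return web.Response(text=f"received {len(data)} bytes")


app.router.add_post("/upload", upload)

if __name__ == "__main__":
    # Drop idle keep-alive connections after 30 seconds rather than
    # holding their resources open.
    web.run_app(app, keepalive_timeout=30)
```

Per-connection read timeouts and per-IP connection limits are still best enforced in the fronting proxy, as the comment above recommends.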
FWIW: if you are using aiohttp because it is a dependency of s3fs, and you're getting dinged with a PRISMA-2023-0024 finding, you're probably safe. In that use case, it's only making GET requests to an S3 file system (AWS S3, MinIO, etc.). This issue only applies when an outside client is making a PUT against aiohttp.
Literally any webserver can be DoS'd with RUDY if it is misconfigured (i.e. TTL too high, or not rejecting requests with unnecessarily large Content-Length headers). Both of these are configurable in aiohttp. Heck, even if you just have a lot of legitimate traffic come in at the same time from a bunch of users on 56k modems sending large multipart payloads, this could happen on any HTTP webserver. The fact is, Content-Length headers are not security and should not be thought of as such.

Prisma needs to get it together; they are costing their customers collectively massive amounts of time and money mitigating non-issues by continuing to be the outlier on this and singling out aiohttp on an issue that affects all webservers, because of an incomplete test run by one person. Not to mention, that person STILL has not answered the very important question "did you close the connection or leave it open?", while PRISMA-2023-0024 explicitly states "as long as the connection is not dropped by the user". The fact that they included in their statement that the connection must be left open further validates that this is not unique to aiohttp.
@HarryCaveMan |
Apologies... it was never reported by OWASP (Dependency-Track) or any other container scanner besides Twistlock (e.g. Snyk or AWS ECR), and it was never even in the NVD because no CNA will own it either. This is probably because ANY webserver that accepts large multipart payloads and has a high/long TTL could be DoS'd by a RUDY attack, so there is no real use in singling out aiohttp based on a single test with incomplete info performed by one person... I was thinking of a different CVE that was disputed. Not sure why Prisma continues to be the singular outlier on this issue, though...
Describe the bug
A server should not trust client-provided Content-Length.
In typical pool-based servers, a DoS will cause all threads/processes in the pool to be busy, preventing the server from accepting more connections.
In a non-blocking asyncio server like yours, the service would still be able to accept connections,
but the attack will cause the server to run out of memory:
each request carries 100 MB of content, and those contents will never be freed because the server is still waiting for bytes that never come.
Even if it uses a smaller buffer size, the DoS attack will consume all open files, etc.
To Reproduce
server.py
create a large file
any large file will work; a 100 MB file will do
client
claim a 120 MB Content-Length, causing the server to keep 100 MB in RAM and block forever
similar cases that reproduce the bug but do not cause DoS
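The exact client script is elided above, but the shape of the request it sends can be sketched as follows (hypothetical helper; actually exercising the bug requires sending these bytes to a live server and then keeping the socket open without sending more):

```python
def build_partial_upload(host: str, declared: int, sent: bytes) -> bytes:
    """Build a raw HTTP request whose Content-Length overstates the
    body that will actually be sent -- the misbehaving client in this
    report. The path /upload is illustrative."""
    assert len(sent) < declared  # the whole point: under-deliver
    headers = (
        f"POST /upload HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {declared}\r\n"
        f"\r\n"
    ).encode()
    return headers + sent
```

For example, `build_partial_upload("localhost", 120_000_000, payload_100mb)` declares 120 MB but delivers only 100 MB, leaving the server waiting on the remaining 20 MB forever.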
Expected behavior
Logs/tracebacks
aiohttp Version
multidict Version
yarl Version
OS
Fedora 36
$ uname -r
6.1.7-100.fc36.x86_64
Related component
Server
Additional context
No response
Code of Conduct