
Bad error handling for IOException while reading incoming request body #749

Closed
davidni opened this issue Jan 14, 2019 · 11 comments · Fixed by #1769
Labels: bug (Identified as a potential bug), Jan'24 (January 2024 release), low (Low priority), merged (Issue has been merged to dev and is waiting for the next release)

Comments

@davidni
Contributor

davidni commented Jan 14, 2019

Expected Behavior

If an IOException happens while reading the incoming request body (in RequestMapper.MapContent here), Ocelot fails the request with an UnmappableRequestError and status code 404.

404 is not appropriate, and the error logged by Ocelot should be more specific than UnmappableRequestError. It just so happens that the exception occurs while doing request mapping, but it's really an IO problem while reading the incoming request stream. The response status shouldn't be 404. In the case of a too-large request, 400 might be appropriate. In other cases, something in the 5xx range.

Steps to Reproduce the Problem

This happens consistently, for example, for requests that are too large for the server (HttpSys throws an IOException if the Content-Length of the incoming request is larger than 30MB by default -- see: AspNetCore source).

  1. Send a request to Ocelot, hosted on HttpSys, with Content-Length larger than 30,000,000.
  2. Boom!

Ocelot returns 404, and logs UnmappableRequestError.
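
A rough client-side sketch of such a request, assuming Ocelot listens on http://localhost:5000 behind HttpSys and has some route configured (the URL and route below are placeholders, not from the actual setup):

using System;
using System.Net.Http;
using System.Threading.Tasks;

internal static class OversizedRequestRepro
{
    public static async Task Main()
    {
        using var client = new HttpClient();

        // ~31 MB payload, just above the 30,000,000-byte HTTP.sys default.
        var payload = new byte[31_000_000];
        using var content = new ByteArrayContent(payload); // sets Content-Length for us

        // "some-route" stands in for any route configured in ocelot.json.
        var response = await client.PostAsync("http://localhost:5000/some-route", content);

        // With the buggy behaviour this prints 404 instead of a more accurate 413/4xx/5xx.
        Console.WriteLine((int)response.StatusCode);
    }
}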

Specifications

  • Version: 12.0.1 (should also apply to 13.0.0 by code inspection)
  • Platform: Windows Server 2016 / .NET Framework 4.6.2
@philproctor philproctor added bug Identified as a potential bug help wanted Not actively being worked on. If you plan to contribute, please drop a note. labels Jan 16, 2019
@Jalalx

Jalalx commented Nov 9, 2020

I think 413 Payload Too Large is a better error code. What do you guys think?
@TomPallister

@ThreeMammals ThreeMammals deleted a comment from daoye Oct 16, 2023
@raman-m
Member

raman-m commented Oct 16, 2023

@davidni commented on Jan 15, 2019

David, thanks for reporting!
I don't see any PRs from you...
Is it still relevant? The issue was created for the old legacy version of Ocelot.
Do you intend to discuss the bug?


404 is not appropriate, and the error logged by Ocelot should be more specific than UnmappableRequestError. It just so happens that the exception occurs while doing request mapping, but it's really an IO problem while reading the incoming request stream. The response status shouldn't be 404. In the case of a too-large request, 400 might be appropriate. In other cases, something in the 5xx range.

You might be surprised, but Ocelot has a strange error-handling mechanism with status code overriding, aka the Http Error Status Codes feature (docs in source). This is Tom's design; in the future it should be redesigned. I don't like HTTP status code overriding, because the actual status is lost in the Ocelot core. And, I guess, Ocelot doesn't return the actual code in a custom response header.


In other cases, something in the 5xx range.

What cases are you talking about?


400 might be appropriate

Why not? When will you create a PR with a bug fix?
In my opinion, 411 Length Required and 413 Content Too Large are more appropriate, because they are more specific.

@raman-m raman-m added needs validation Issue has not been replicated or verified yet needs feedback Issue is waiting on feedback before acceptance medium effort Likely a few days of development effort small effort Likely less than a day of development effort. labels Oct 16, 2023
@raman-m
Member

raman-m commented Oct 16, 2023

@Jalalx commented on Nov 9, 2020

Agreed on 413! This is the best status code to return in case of Content-Length problems with the request.

P.S. Don't reference Tom in chats. He hasn't read Ocelot GitHub notifications since 2020... Only the current maintainers read notifications: me and Raynald.

@ks1990cn
Contributor

@raman-m This issue is present with the latest Ocelot version too. I am able to replicate it now; I was missing the UseHttpSys() configuration.
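
A minimal hosting sketch of the kind of setup needed to hit this code path (assuming a top-level Program.cs with an ocelot.json in the content root; the file name and the limit value below are just the defaults, not taken from any actual repro project):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// Host Ocelot behind HTTP.sys (Windows only); the 30,000,000-byte default for
// MaxRequestBodySize is what makes body reads throw the IOException described above.
builder.WebHost.UseHttpSys(options =>
{
    options.MaxRequestBodySize = 30_000_000; // default; set to null to remove the limit
});

builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
builder.Services.AddOcelot(builder.Configuration);

var app = builder.Build();
await app.UseOcelot();
await app.RunAsync();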

@raman-m raman-m added question Initially seen a question could become a new feature or bug or closed ;) waiting Waiting for answer to question or feedback from issue raiser labels Oct 16, 2023
@raman-m
Member

raman-m commented Oct 16, 2023

@ks1990cn I would prefer to wait for a reply from the issue raiser, because I need to understand his vision and idea for fixing this bug.
If he stays silent, he will be unassigned.
Let's wait a couple of weeks, up to one month...

@ks1990cn
Contributor

ks1990cn commented Oct 18, 2023

Please refer to the dummy changes I made to RequestMapper.cs on my branch: https://github.com/ks1990cn/Ocelot/tree/ts/issue_729.

@raman-m I respect your statement and am waiting for @davidni to come back with his idea on this bug. I am just trying to understand the flow and what I can do to fix this, if anything comes up. Meanwhile, I have come up with the following question:

In RequestMapper.cs on my branch, I have created another parameterized constructor which provides a different MaximumRequestBodySize for each host server (IIS, HTTP.sys, Kestrel). Is there any generic way to figure out MaximumRequestBodySize irrespective of the hosting server? (See the sketch after the note below.)

Note: in this Stack Overflow discussion they only configure it per server type, even when doing it at the global level:
https://stackoverflow.com/questions/38698350/increase-upload-file-size-in-asp-net-core
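
One server-agnostic option might be the IHttpMaxRequestBodySizeFeature, which Kestrel, HTTP.sys and IIS all expose per request; a rough sketch of what I mean (the helper class and its method names are mine, not Ocelot code):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;

public static class RequestBodySizeHelper
{
    // Returns the server's limit for the current request, or null when the server
    // reports no limit (or the feature is not available, e.g. on TestServer).
    public static long? GetMaxRequestBodySize(HttpContext context)
    {
        return context.Features.Get<IHttpMaxRequestBodySizeFeature>()?.MaxRequestBodySize;
    }

    // Attempts to change the limit for the current request; this fails once the
    // body has started being read, because the feature becomes read-only.
    public static bool TrySetMaxRequestBodySize(HttpContext context, long? newLimit)
    {
        var feature = context.Features.Get<IHttpMaxRequestBodySizeFeature>();
        if (feature is null || feature.IsReadOnly)
        {
            return false;
        }

        feature.MaxRequestBodySize = newLimit;
        return true;
    }
}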

@raman-m
Member

raman-m commented Oct 18, 2023

@ks1990cn OK, great! In my opinion we could wait for David's reply for months, even years... But who knows...
If your solution is ready, you could open a PR to the develop branch.
Also, pay attention to the fact that we have another PR, #1724, which relates to RequestMapper!

@ks1990cn
Contributor

Sure, I am looking into it. I am just trying to figure out:

which hosting server the current application is hosted on, because MaximumRequestBodySize defaults to 30 MB on every server (IIS, HTTP.sys, Kestrel) and we can modify it too, but how do we find out which one the current application is using?

I'm looking into the aspnetcore source too, to understand how I can find this value.
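
A rough idea, not tested against every host: the registered IServer reveals which server implementation the app is running on, though the HTTP.sys and in-process IIS server types are internal, so the hypothetical class below falls back to type-name matching:

using System;
using Microsoft.AspNetCore.Hosting.Server;
using Microsoft.Extensions.DependencyInjection;

public static class ServerKindDetector
{
    public static string Detect(IServiceProvider services)
    {
        // Kestrel's server type is public; the HTTP.sys and in-process IIS servers
        // are internal, so we can only inspect the type name.
        var server = services.GetRequiredService<IServer>();
        var typeName = server.GetType().FullName ?? string.Empty;

        if (typeName.Contains("Kestrel", StringComparison.OrdinalIgnoreCase)) return "Kestrel";
        if (typeName.Contains("HttpSys", StringComparison.OrdinalIgnoreCase)) return "HTTP.sys";
        if (typeName.Contains("IIS", StringComparison.Ordinal)) return "IIS (in-process)";

        return typeName; // unknown or test server
    }
}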

@raman-m raman-m added accepted Bug or feature would be accepted as a PR or is being worked on and removed help wanted Not actively being worked on. If you plan to contribute, please drop a note. question Initially seen a question could become a new feature or bug or closed ;) medium effort Likely a few days of development effort small effort Likely less than a day of development effort. needs validation Issue has not been replicated or verified yet needs feedback Issue is waiting on feedback before acceptance waiting Waiting for answer to question or feedback from issue raiser labels Oct 20, 2023
@raman-m
Member

raman-m commented Oct 30, 2023

Steps to Reproduce the Problem

This happens consistently, for example, for requests that are too large for the server (HttpSys throws an IOException if the Content-Length of the incoming request is larger than 30MB by default -- see: AspNetCore source).

  1. Send a request to Ocelot, hosted on HttpSys, with Content-Length larger than 30,000,000.
  2. Boom!

We need to fix this issue for HttpSys only, right?
But the Ocelot team is not responsible for third-party deployments and hosting, except Docker and Kestrel hosting, with limited support for IIS use cases.
It seems this is a very rare use case. And it will not be easy to fix, because an HttpSys/IIS environment must be deployed for testing...

@davidni FYI
And please come back to us with a ready PR.

@raman-m raman-m removed the accepted Bug or feature would be accepted as a PR or is being worked on label Oct 30, 2023
@ThreeMammals ThreeMammals deleted a comment from ks1990cn Oct 30, 2023
@ThreeMammals ThreeMammals deleted a comment from ks1990cn Oct 30, 2023
@ggnaegi
Member

ggnaegi commented Oct 30, 2023

@raman-m The error handling is hiding the 413 Payload Too Large exception. I've tested this with Kestrel; it throws a BadHttpRequestException. I will create a PR as soon as I have some time...

catch (BadHttpRequestException badHttpRequest)
{
    return badHttpRequest.StatusCode == 413
        ? new ErrorResponse<HttpRequestMessage>(new PayloadTooLargeError(badHttpRequest))
        : new ErrorResponse<HttpRequestMessage>(new UnmappableRequestError(badHttpRequest));
}
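
For anyone who wants to trigger that exception locally, a small sketch assuming a throwaway minimal API app (the /echo route and the 1 KB limit are arbitrary choices for illustration): lowering Kestrel's MaxRequestBodySize makes even a modest POST surface the 413 BadHttpRequestException that the catch block above translates.

using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    // Anything above 1 KB now throws BadHttpRequestException (413) when the body is read.
    options.Limits.MaxRequestBodySize = 1024;
});

var app = builder.Build();

app.MapPost("/echo", async (HttpRequest request) =>
{
    try
    {
        using var reader = new StreamReader(request.Body);
        var body = await reader.ReadToEndAsync(); // reading the body triggers the limit check
        return Results.Ok(body.Length);
    }
    catch (BadHttpRequestException ex)
    {
        // ex.StatusCode is 413 when the body exceeds MaxRequestBodySize.
        return Results.StatusCode(ex.StatusCode);
    }
});

app.Run();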

@raman-m raman-m added the low Low priority label Oct 31, 2023
@raman-m raman-m assigned ggnaegi and unassigned davidni and ks1990cn Oct 31, 2023
@raman-m
Member

raman-m commented Oct 31, 2023

@ggnaegi commented on Oct 30

Great! But with low priority... when you have free time.

@raman-m raman-m added the accepted Bug or feature would be accepted as a PR or is being worked on label Jan 7, 2024
@raman-m raman-m added the Jan'24 January 2024 release label Feb 22, 2024
@raman-m raman-m added this to the January'24 milestone Feb 22, 2024
raman-m added a commit that referenced this issue Mar 1, 2024
…est body (#1769)

* Adding "quick and..." fix to the payload too large exception, adding two acceptance tests for kestrel and http sys.

* skipping test if platform is not windows

* Moving Payload too large error to HttpExceptionErrorMapper. A review of the exception handling is needed.

* Fix issues after merge in Steps.cs

* using collection expression here

* Code review by @raman-m

---------

Co-authored-by: Raman Maksimchuk <dotnet044@gmail.com>
@raman-m raman-m added merged Issue has been merged to dev and is waiting for the next release and removed accepted Bug or feature would be accepted as a PR or is being worked on labels Mar 1, 2024
@raman-m raman-m mentioned this issue Mar 4, 2024