HTTP::StaticFileHandler serve pre-gzipped content #9626
Conversation
Should this be documented in the handler docs? I would never have guessed this happens automatically. What other web servers do this?
Absolutely! Let me add some docs.
NGINX: http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html. Webpack has a module to emit pre-compressed files. It's a really common pattern to serve compressed static content.
@asterite I just added the docs. I also removed the word "simple" because this is probably the most complex handler we have right now in the standard library 😆
Looks great!
I'm not entirely sure it's a good idea to stuff more and more features into the static file handler. It's supposed to be just a really simple tool. IMO such advanced features should not be provided by the stdlib. It just adds more and more complexity, and we don't have the resources to maintain a really stable, production-ready webserver implementation.
I'm not planning to add excessive complexity, but this was really easy to implement and adds a huge real benefit. In some cases it might even delay the need for many people to set up more complex deployments with caching layers or separate webservers for static content.
What about brotli and other possible encoding algorithms, or are they too marginally used to be worth supporting?
@j8r maybe, but I didn't want to add more complexity or change the API at this moment.
FYI, these specs fail when executed via Docker with the source mounted into the container. The mtime of the file is truncated to whole seconds on read and set. This makes the test.txt and test.txt.gz mtimes differ by a whole second 😞.
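The failure above can be illustrated with a small sketch (in Ruby, with hypothetical timestamp values, not the actual spec code): once a filesystem truncates stored mtimes to whole seconds, two files stamped with the same fractional time can end up nearly a full second apart.

```ruby
# Hypothetical values: gzip -k copies the source mtime to the .gz file,
# but a filesystem that truncates mtimes to whole seconds on set can make
# the copy appear up to a second older than the original.
original  = 10.9                 # fractional mtime preserved for test.txt
truncated = original.to_i.to_f   # 10.0 once truncated for test.txt.gz

gap = original - truncated       # ~0.9 s: far beyond a 1 ms allowance
puts gap > 0.001
```

That gap is why a small tolerance in the handler's mtime comparison cannot paper over this particular environment.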
This allows serving `.gz` files just like other web servers do, avoiding on-the-fly compression: the content of `foo.txt.gz` is returned when `foo.txt` is requested. This reduces CPU usage and response time. For example, on my machine a 1 MB JavaScript file is served in around 35 ms when compressed with `HTTP::CompressHandler`, but that goes down to around 300 µs when the `.gz` file is served instead.

The `.gz` file is only served if it's newer than the uncompressed file. The modification time of the uncompressed file is still used for cache checks and the `Etag` header.
I found that on macOS, `gzip -k` doesn't keep the exact same modification time, losing some precision. That's why I allow the compressed file to be up to 1 millisecond older than the original and still be served as the static content. I think it's very unlikely this could cause issues in the real world.

This PR works great with #9625, now that it allows other handlers to return compressed content.