feat(requester-node-http): add gzip compression for requests and response decompression #1603
base: v4
Conversation
When the accept-encoding header includes gzip, request bodies are compressed with zlib.gzipSync() and a content-encoding: gzip header is added. Responses with content-encoding: gzip are decompressed via zlib.createGunzip(). Decompression errors resolve gracefully, following the existing error-handling pattern (never reject). Includes 8 comprehensive tests covering compression, decompression, error handling, multi-value headers, and full round-trip scenarios.
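A minimal sketch of the request-side flow described above; the header names come from the description, while the helper name and `request` shape are illustrative (and real code would also need to handle multi-value accept-encoding headers, which the tests cover):

```js
const zlib = require('zlib');

// Hypothetical helper mirroring the described behaviour.
function maybeGzipRequest(request) {
  const acceptEncoding = String(request.headers['accept-encoding'] || '');
  if (acceptEncoding.includes('gzip') && request.data !== undefined) {
    request.data = zlib.gzipSync(request.data);
    request.headers['content-encoding'] = 'gzip';
  }
  return request;
}

// Example usage:
const request = {
  headers: { 'accept-encoding': 'gzip' },
  data: JSON.stringify({ query: 'phone' }),
};
console.log(maybeGzipRequest(request).headers['content-encoding']); // 'gzip'
```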
Use Buffer.byteLength() to set content-length for both compressed and uncompressed request bodies, avoiding unnecessary chunked transfer encoding overhead.
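A hedged sketch of why byte length matters here (`shouldCompress` comes from the diff below; the sample data is illustrative): a string's `.length` counts UTF-16 code units, while `Buffer.byteLength()` counts the bytes that actually go on the wire, which is what content-length must describe.

```js
const zlib = require('zlib');

const data = '{"query":"téléphone"}'; // contains multi-byte characters
const shouldCompress = false;        // illustrative flag

const body = shouldCompress ? zlib.gzipSync(data) : data;

// Buffer.byteLength() works for both Buffer (compressed) and string
// (uncompressed) bodies, so Node never falls back to chunked encoding.
const contentLength = Buffer.byteLength(body);
console.log(data.length, contentLength); // 21 vs 23: byte length differs
```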
Small request bodies gain little from compression and the gzip overhead can exceed the savings. Only compress when body size >= 1024 bytes.
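Roughly, with a hypothetical constant name (the 1024-byte cutoff is the one stated above):

```js
// Below ~1 KiB, gzip's header/trailer overhead typically outweighs the
// savings, so skip compression entirely.
const MIN_GZIP_SIZE = 1024; // bytes (hypothetical name)

function shouldCompressBody(data, acceptsGzip) {
  return (
    acceptsGzip &&
    data !== undefined &&
    Buffer.byteLength(data) >= MIN_GZIP_SIZE
  );
}

console.log(shouldCompressBody('tiny', true));           // false
console.log(shouldCompressBody('x'.repeat(2048), true)); // true
```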
Release the native zlib handle promptly instead of waiting for garbage collection. Covers both the error path (decompression failure, response stream error) and the timeout path (req.abort mid-transfer).
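A sketch of that cleanup, assuming the gunzip stream from the diff below; zlib streams expose `close()`, which frees the native handle immediately:

```js
const zlib = require('zlib');

const gunzip = zlib.createGunzip();

// Error path: corrupt payload or response stream failure.
gunzip.on('error', () => {
  gunzip.close(); // free the native zlib handle now, not at GC time
  // ...then resolve with an error response (never reject), per the PR
});

// Timeout path (sketch): if the request aborts mid-transfer, the
// half-consumed gunzip stream must be closed too, e.g.:
// req.on('timeout', () => { req.abort(); gunzip.close(); });
```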
```js
gunzip.on('data', onData);
gunzip.on('end', onEnd);
gunzip.on('error', onError);
```
I'm a bit rusty in JavaScript, so please disregard if not relevant: why do we have to listen on the gunzip object AND the response? We pipe the response into the gunzip object, so why don't we listen once on the response instead of branching?
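For context, the usual reason (a general fact about Node streams, not specific to this PR): `pipe()` forwards data but does not forward `'error'` events, so an error on the gunzip stream, e.g. corrupt gzip data, would go unhandled if only the response had a listener. A runnable sketch with a stand-in response stream:

```js
const zlib = require('zlib');
const { PassThrough } = require('stream');

const res = new PassThrough(); // stand-in for the HTTP response stream
const gunzip = zlib.createGunzip();

res.pipe(gunzip); // data flows res -> gunzip, but errors do NOT propagate

res.on('error', (err) => console.error('socket error:', err.message));
gunzip.on('error', (err) => console.error('bad gzip data:', err.message));
gunzip.on('data', (chunk) => console.log(chunk.toString()));
gunzip.on('end', () => console.log('done'));

res.end(Buffer.from('not gzip')); // fires only the gunzip error listener
```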
```diff
  if (request.data !== undefined) {
-   req.write(request.data);
+   const body = shouldCompress ? zlib.gzipSync(request.data) : request.data;
```
Is it really safe to use the sync API in this context? If used server-side, request bodies can be pretty large (multiple MB), so we should probably avoid blocking the event loop. I would rather go with the async approach.
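A sketch of the async variant this comment suggests, using the callback-based `zlib.gzip()` via `util.promisify` so large bodies are compressed on libuv's thread pool instead of the main thread (names other than the zlib/util APIs are illustrative):

```js
const zlib = require('zlib');
const { promisify } = require('util');

const gzipAsync = promisify(zlib.gzip);

// Hypothetical helper: same decision as the diff, without blocking.
async function prepareBody(data, shouldCompress) {
  return shouldCompress ? gzipAsync(data) : data;
}

prepareBody('x'.repeat(4096), true).then((body) => {
  console.log(Buffer.byteLength(body)); // compressed size
});
```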