@Fluf22 Fluf22 commented Feb 6, 2026

When the accept-encoding header includes gzip, request bodies are compressed with zlib.gzipSync() and a content-encoding: gzip header is added. Responses with content-encoding: gzip are decompressed via zlib.createGunzip(). On decompression errors, the promise resolves gracefully (it never rejects), following the existing error-handling pattern.

Includes 8 comprehensive tests covering compression, decompression, error handling, multi-value headers, and full round-trip scenarios.

@Fluf22 Fluf22 self-assigned this Feb 6, 2026
@Fluf22 Fluf22 requested a review from shortcuts February 6, 2026 15:47
Use Buffer.byteLength() to set content-length for both compressed and uncompressed request bodies, avoiding unnecessary chunked transfer encoding overhead.

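For context (a general Node.js behavior, not specific to this PR): Buffer.byteLength() counts encoded bytes rather than UTF-16 code units, which is exactly what a content-length header needs:

```javascript
// String.length counts UTF-16 code units; content-length must count bytes.
const body = 'héllo';                   // 5 characters
const units = body.length;              // 5 code units
const bytes = Buffer.byteLength(body);  // 6 bytes once UTF-8 encoded ('é' is 2 bytes)
```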
Small request bodies gain little from compression and the gzip overhead can exceed the savings. Only compress when body size >= 1024 bytes.

Release the native zlib handle promptly instead of waiting for garbage collection. The cleanup covers both the error path (decompression failure, response stream error) and the timeout path (req.abort mid-transfer).
@Fluf22 Fluf22 requested a review from Ant-hem February 9, 2026 16:59
Comment on lines +109 to +111
gunzip.on('data', onData);
gunzip.on('end', onEnd);
gunzip.on('error', onError);

I'm a bit rusty in JavaScript, so please disregard if not relevant: why do we have to listen on the gunzip object AND the response? We pipe the response into the gunzip object, so why don't we listen once on the response instead of branching?


if (request.data !== undefined) {
  req.write(request.data);
  const body = shouldCompress ? zlib.gzipSync(request.data) : request.data;

Is it really safe to use the sync API in this context? If used on the server side, the requests can be pretty large (multiple MB), so we should probably avoid blocking. I would rather go with the async approach.


3 participants