Optimise decompression size #12357

Open
Dreamsorcerer wants to merge 10 commits into master from optimise-iter-chunked

Conversation

Member

@Dreamsorcerer Dreamsorcerer commented Apr 12, 2026

Raise the decompression max_length if we know the user is going to allow a size larger than the default anyway.
Also get rid of AsyncStreamReaderMixin, which just doesn't make any sense.
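The idea in the description can be sketched as follows. This is a minimal illustration using zlib's `decompressobj`, not aiohttp's actual implementation; `decompress_chunk`, `requested_size`, and `default_cap` are hypothetical names:

```python
import zlib

def decompress_chunk(decompressor, data, requested_size, default_cap=65536):
    # If the caller has said they will accept more than the default cap
    # anyway, pass the larger value as max_length so the decompressor can
    # emit one big chunk instead of many small ones (fewer round trips
    # through the feed/decompress loop).
    max_length = max(requested_size, default_cap)
    return decompressor.decompress(data, max_length)

compressed = zlib.compress(b"x" * 200_000)
d = zlib.decompressobj()
# Caller allows up to 1 MB, so the whole 200 kB payload comes out at once.
out = decompress_chunk(d, compressed, requested_size=1_000_000)
```

With the default cap alone, the same payload would be drip-fed in 64 KiB slices, which is where the benchmark wins below come from.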

@Dreamsorcerer Dreamsorcerer added bot:chronographer:skip This PR does not need to include a change note backport-3.14 Trigger automatic backporting to the 3.14 release branch by Patchback robot labels Apr 12, 2026
@Dreamsorcerer Dreamsorcerer requested a review from asvetlov as a code owner April 12, 2026 21:12
@codecov

codecov bot commented Apr 12, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.92%. Comparing base (0c7ce34) to head (8e0877f).
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #12357   +/-   ##
=======================================
  Coverage   98.92%   98.92%           
=======================================
  Files         133      133           
  Lines       46550    46567   +17     
  Branches     2423     2427    +4     
=======================================
+ Hits        46048    46065   +17     
  Misses        373      373           
  Partials      129      129           
Flag Coverage Δ
CI-GHA 98.98% <100.00%> (-0.01%) ⬇️
OS-Linux 98.72% <100.00%> (+<0.01%) ⬆️
OS-Windows 96.98% <90.62%> (-0.01%) ⬇️
OS-macOS 97.87% <90.62%> (-0.01%) ⬇️
Py-3.10.11 97.38% <90.62%> (-0.01%) ⬇️
Py-3.10.20 97.86% <90.62%> (+<0.01%) ⬆️
Py-3.11.15 98.10% <90.62%> (+<0.01%) ⬆️
Py-3.11.9 97.64% <90.62%> (-0.01%) ⬇️
Py-3.12.10 97.73% <90.62%> (-0.01%) ⬇️
Py-3.12.13 98.20% <90.62%> (+<0.01%) ⬆️
Py-3.13.12 98.44% <90.62%> (-0.01%) ⬇️
Py-3.14.3 98.50% <90.62%> (-0.01%) ⬇️
Py-3.14.4t 97.51% <90.62%> (-0.01%) ⬇️
Py-pypy3.11.15-7.3.21 97.34% <90.62%> (-0.01%) ⬇️
VM-macos 97.87% <90.62%> (-0.01%) ⬇️
VM-ubuntu 98.72% <100.00%> (+<0.01%) ⬆️
VM-windows 96.98% <90.62%> (-0.01%) ⬇️
cython-coverage 38.22% <75.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@codspeed-hq

codspeed-hq bot commented Apr 12, 2026

Merging this PR will improve performance by 55.88%

⚡ 2 improved benchmarks
✅ 59 untouched benchmarks
⏩ 4 skipped benchmarks1

Performance Changes

Benchmark BASE HEAD Efficiency
test_get_request_with_251308_compressed_chunked_payload[zlib_ng.zlib_ng-pyloop] 244 ms 215.5 ms +13.2%
test_get_request_with_251308_compressed_chunked_payload[isal.isal_zlib-pyloop] 113.3 ms 72.7 ms +55.88%

Comparing optimise-iter-chunked (8e0877f) with master (0c7ce34)

Open in CodSpeed

Footnotes

  1. 4 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@Dreamsorcerer Dreamsorcerer changed the title Optimise iter_chunked() Optimise decompression size Apr 12, 2026
@Dreamsorcerer
Member Author

Dreamsorcerer commented Apr 12, 2026

There's a bit of awkwardness around .read(), and I'm not sure this is the best approach. Basically, brotlicffi with max_length=0 produces empty bytes, and with max_length=sys.maxsize it hits a MemoryError because it tries to pre-allocate an entire array of that size. So we need to work around that.
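One way to picture the workaround is to clamp the value before handing it to the decompressor. This is a hypothetical sketch, not aiohttp's actual code; the function name and the 1 GiB cap are illustrative choices:

```python
import sys

# Illustrative cap: large enough to behave as "unlimited" in practice,
# small enough that a decompressor which pre-allocates max_length bytes
# (as brotlicffi does) won't raise MemoryError like sys.maxsize can.
_CAP = 2 ** 30

def safe_max_length(requested: int) -> int:
    # brotlicffi returns b"" when max_length=0, so never pass 0 through;
    # treat a non-positive request as "give me a large chunk".
    if requested <= 0:
        return _CAP
    return min(requested, _CAP)
```

The exact cap and the handling of the zero case are design choices; the point is only that both degenerate values need to be kept away from the decompressor.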

 res = await stream.readexactly(3)
 assert res == b"dat"
-assert not stream._protocol.resume_reading.called  # type: ignore[attr-defined]
+assert stream._protocol.resume_reading.called  # type: ignore[attr-defined]
Member Author

@Dreamsorcerer Dreamsorcerer Apr 13, 2026


stream here has a limit of 1, so after reading 3 bytes there is still 1 left. Previously that would avoid resuming as it didn't exceed the limit. Now the limit gets implicitly raised to 3, so a resume is triggered. This seems like a good idea to me; if the user asked for 3 bytes, there's a good chance they'll want another 3, so might as well buffer them up.
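The behaviour change described above can be modelled with a toy buffer (this is a simplified stand-in, not aiohttp's StreamReader; all names here are illustrative):

```python
class MiniStream:
    # Toy model: readexactly(n) implicitly raises the flow-control limit
    # to n, so after the read the transport is resumed even though a byte
    # is still buffered (previously, 1 buffered byte >= limit of 1 meant
    # no resume).
    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.buffer = b""
        self.paused = True
        self.resumed = False

    def feed(self, data: bytes) -> None:
        self.buffer += data

    def readexactly(self, n: int) -> bytes:
        self.limit = max(self.limit, n)  # implicit limit raise
        data, self.buffer = self.buffer[:n], self.buffer[n:]
        if self.paused and len(self.buffer) < self.limit:
            self.paused = False
            self.resumed = True  # stand-in for protocol.resume_reading()
        return data

s = MiniStream(limit=1)
s.feed(b"data")
chunk = s.readexactly(3)  # 1 byte left, limit now 3 -> resume triggers
```

With the old behaviour (limit fixed at 1), the 1 remaining byte would not drop below the limit and no resume would fire, matching the removed assertion in the diff.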
