Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##           master   #12357   +/-   ##
=======================================
  Coverage   98.92%   98.92%
=======================================
  Files         133      133
  Lines       46550    46567    +17
  Branches     2423     2427     +4
=======================================
+ Hits        46048    46065    +17
  Misses        373      373
  Partials      129      129
```

Flags with carried forward coverage won't be shown.
Merging this PR will improve performance by 55.88%.
for more information, see https://pre-commit.ci
…nto optimise-iter-chunked
There's a bit of awkwardness here, and I'm not sure this is the best approach around `.read()`. Basically, brotlicffi with `max_length=0` produces empty bytes, and with `max_length=sys.maxsize` it hits a MemoryError because it tries to pre-allocate an entire array of that size. So we need to work around both cases.
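As a hedged illustration of the workaround idea (using zlib's `decompressobj`, whose `max_length` parameter has similar semantics; the cap value and helper name are assumptions for this sketch, not what the PR actually does):

```python
import zlib

# Assumed cap: large enough for typical chunks, small enough that a backend
# which pre-allocates max_length bytes up front stays safe. Illustrative
# value only, not aiohttp's.
MAX_DECOMPRESS_CHUNK = 2**20  # 1 MiB

def bounded_decompress(d: "zlib._Decompress", data: bytes, max_length: int) -> bytes:
    # Avoid the two problem cases described above: 0 (which brotlicffi
    # treats as "produce nothing") and sys.maxsize (which can trigger a
    # huge pre-allocation). Clamp to a sane upper bound instead.
    if max_length <= 0:
        clamped = MAX_DECOMPRESS_CHUNK
    else:
        clamped = min(max_length, MAX_DECOMPRESS_CHUNK)
    return d.decompress(data, clamped)

d = zlib.decompressobj()
payload = zlib.compress(b"x" * 100)
first = bounded_decompress(d, payload, max_length=10)          # at most 10 bytes out
rest = bounded_decompress(d, d.unconsumed_tail, max_length=0)  # 0 clamped to the cap
```

With zlib, output beyond `max_length` is held back in `unconsumed_tail`, so the caller can keep pulling bounded chunks instead of materializing the whole decompressed payload at once.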
```diff
 res = await stream.readexactly(3)
 assert res == b"dat"
-assert not stream._protocol.resume_reading.called  # type: ignore[attr-defined]
+assert stream._protocol.resume_reading.called  # type: ignore[attr-defined]
```
`stream` here has a limit of 1, so after reading 3 bytes there is still 1 byte left. Previously that would avoid resuming, as the remainder didn't exceed the limit. Now the limit gets implicitly raised to 3, so a resume is triggered. This seems like a good idea to me: if the user asked for 3 bytes, there's a good chance they'll want another 3, so we might as well buffer them up.
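The resume behaviour described above can be sketched roughly like this (a hypothetical simplified reader, not aiohttp's actual `StreamReader`):

```python
class FlowControlReader:
    """Toy reader: pauses its transport when the buffer exceeds ``limit``."""

    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.buffer = bytearray()
        self.paused = False

    def feed(self, data: bytes) -> None:
        self.buffer += data
        if len(self.buffer) > self.limit:
            self.paused = True  # flow control: stop reading from the socket

    def readexactly(self, n: int) -> bytes:
        # Implicitly raise the limit to n: if the caller asked for n bytes,
        # buffering up to n more is probably useful.
        self.limit = max(self.limit, n)
        chunk = bytes(self.buffer[:n])
        del self.buffer[:n]
        if self.paused and len(self.buffer) < self.limit:
            self.paused = False  # resume: there is room for more data again
        return chunk
```

With `limit=1`, feeding 4 bytes pauses the transport; `readexactly(3)` raises the limit to 3 and leaves 1 byte buffered, which is now below the limit, so reading resumes. That is the behaviour the changed assertion checks.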
Raise the compression max_length if we know the user is going to allow a size larger than the default anyway.
Also get rid of AsyncStreamReaderMixin, which just doesn't make any sense.