feat: add long-term chunk geometry caching #529
felmonon wants to merge 21 commits into zardoy:next from …
Conversation
Implements chunk geometry caching to improve performance when returning to previously visited areas:
- IndexedDB-based caching for servers that support the chunk-cache channel
- In-memory caching with LRU eviction for other servers
- Automatic cache invalidation when blocks change
- Cache hit/miss tracking for performance monitoring

Files added:
- src/chunkGeometryCache.ts: IndexedDB cache manager with LRU eviction
- renderer/viewer/lib/chunkCacheIntegration.ts: Block hash computation utilities

Files modified:
- src/customChannels.ts: Register chunk-cache channel for server support detection
- renderer/viewer/lib/worldrendererCommon.ts: Integrate cache into geometry handling

This addresses issue zardoy#476 - long-term chunk geometry caching.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
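The in-memory half described above (LRU eviction plus hit/miss tracking) can be sketched with a `Map`, whose insertion order doubles as recency order. This is an illustrative sketch, not the PR's actual `src/chunkGeometryCache.ts` API; the class name and `maxEntries` parameter are assumptions.

```typescript
// Hypothetical minimal LRU cache with hit/miss counters.
class GeometryLruCache<V> {
  private readonly entries = new Map<string, V>()
  hits = 0
  misses = 0

  constructor (private readonly maxEntries: number) {}

  get (key: string): V | undefined {
    const value = this.entries.get(key)
    if (value === undefined) {
      this.misses++
      return undefined
    }
    // Re-insert so the Map's insertion order tracks recency
    this.entries.delete(key)
    this.entries.set(key, value)
    this.hits++
    return value
  }

  set (key: string, value: V): void {
    this.entries.delete(key)
    this.entries.set(key, value)
    if (this.entries.size > this.maxEntries) {
      // Least recently used entry is the first key in insertion order
      const oldest = this.entries.keys().next().value!
      this.entries.delete(oldest)
    }
  }

  // Called when a block change invalidates a section
  invalidate (key: string): void {
    this.entries.delete(key)
  }
}
```

The hit/miss counters map directly onto the "cache hit/miss tracking for performance monitoring" bullet above.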
- Use numeric separators for FNV hash constants
- Use for-of loop instead of indexed for loop
- Use optional chaining for crypto.subtle check
- Use spread operator instead of Array.from
- Simplify boolean return in isGeometryCacheable

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove multi-space inline comments
- Add readonly modifier to memoryCache
- Use spread operator instead of Array.from()
- Add async keyword to promise-returning functions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…uard
- Fix computeBlockHashAsync to pass typed array view instead of .buffer
- Add !useChangeWorker check to cache-hit shortcut in setSectionDirty
- Add sectionHashes validation in tryUseCachedGeometry
- Add WebCrypto guard with FNV-1a fallback in generateBlockHash
- Set serverAddress even when channel is unsupported for cache scoping

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add computeChunkDataHash to populate sectionHashes in addColumn
- Fix tryUseCachedGeometry to properly complete sectionFinished workflow
- Update clear() to only remove current server's entries, not all data
- Delete sectionHashes entries when invalidating section cache

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move eslint-disable comment to the correct line to properly suppress the unicorn/prefer-spread rule. ArrayLike<number> is not Iterable, so we must use Array.from() instead of the spread operator.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add chunkPacketCache.ts for storing raw map_chunk packet data
- Redesign protocol per maintainer feedback:
  1. Client sends cached chunk hashes on login
  2. Server responds with cacheHit or hash for new chunks
  3. Client emits cached packets locally on cache hit
- Use FNV-1a hash algorithm (reproducible in Java)
- Cache both packet data and geometry for optimal performance

Co-Authored-By: Claude <noreply@anthropic.com>
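The FNV-1a choice above can be sketched in TypeScript. The 32-bit constants are the standard FNV offset basis and prime, and `Math.imul` keeps the multiplication in 32-bit integer arithmetic, which is what makes the result reproducible with Java's `int` semantics. The function name and the 8-character hex output format are assumptions, not the PR's exact signature.

```typescript
// Standard 32-bit FNV-1a constants (with the numeric separators
// mentioned in an earlier commit).
const FNV_OFFSET_BASIS = 0x81_1c_9d_c5
const FNV_PRIME = 0x01_00_01_93

// Hashes a section's block states to an 8-char lowercase hex string.
function fnv1a32 (data: Uint16Array): string {
  let hash = FNV_OFFSET_BASIS
  for (const value of data) {
    // Feed both bytes of each 16-bit block state, low byte first,
    // so the result does not depend on host byte order.
    hash ^= value & 0xff
    hash = Math.imul(hash, FNV_PRIME) // 32-bit overflow-wrapping multiply
    hash ^= value >>> 8
    hash = Math.imul(hash, FNV_PRIME)
  }
  // >>> 0 coerces the signed 32-bit result to unsigned before formatting
  return (hash >>> 0).toString(16).padStart(8, '0')
}
```

A Java port would use the same constants with plain `int` multiplication, since `Math.imul` and Java's `*` both wrap modulo 2^32.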
- Replace direct IndexedDB API with browserfs fs.promises
- Store chunks as files: /data/chunk-cache/{server}/{x},{z}.bin
- Store metadata in metadata.json per server directory
- Server isolation via directory structure (fixes cross-server collision)
- Debounced metadata saves for better performance
- Maintain same public API for compatibility
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
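The "debounced metadata saves" bullet above can be sketched as follows. The explicit `flush` hook matches the later "flush pending metadata saves before switching servers" fix; the API shape and delay value are assumptions, not the PR's actual code.

```typescript
// Hypothetical debounce helper: repeated triggers within the window
// collapse into a single save; flush() forces any pending save to run
// immediately (e.g. before switching servers).
function debounce (fn: () => void, delayMs: number): { trigger: () => void, flush: () => void } {
  let timer: ReturnType<typeof setTimeout> | undefined
  return {
    trigger () {
      if (timer) clearTimeout(timer)
      timer = setTimeout(() => { timer = undefined; fn() }, delayMs)
    },
    flush () {
      if (timer) {
        clearTimeout(timer)
        timer = undefined
        fn()
      }
    },
  }
}
```

Many metadata mutations in quick succession then cost one `metadata.json` write instead of one write per mutation.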
- Migrate chunkGeometryCache.ts from IndexedDB to browserfs fs storage
- Fix channel registration to use custom_payload listener instead of try/catch
- Add clearAllBlockStates() calls in resetWorld/destroy to prevent memory leaks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix import order (type imports before regular imports)
- Use replaceAll instead of replace with /g flag
- Improve map_chunk serialization to handle all version-specific fields
- Properly reconstruct typed arrays with correct constructor type

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add requestChunkResend function to recover when cache-hit fails
- Request chunk resend when cached data is missing or deserialization fails
- Fix ArrayBuffer alignment issues with Buffer.slice for proper bounds
- Ensure TextEncoder output has proper ArrayBuffer bounds

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add TTL-based cleanup for stale pending hash entries (30s timeout)
- Clear pendingChunkHashes on bot disconnect
- Add periodic cleanup interval (every 10s)
- Stop cleanup interval when bot ends

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
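The TTL sweep described in this commit could look roughly like the sketch below. The 30 s timeout follows the commit message; the entry shape and function name are assumptions.

```typescript
// Hypothetical pending-entry shape: hash plus insertion timestamp.
interface PendingEntry { hash: string, addedAt: number }

const PENDING_TTL_MS = 30_000 // 30s timeout from the commit message

// Removes entries older than the TTL; intended to run on a periodic
// interval. Returns the number of entries removed.
function sweepStaleEntries (pending: Map<string, PendingEntry>, now: number): number {
  let removed = 0
  for (const [key, entry] of pending) {
    if (now - entry.addedAt > PENDING_TTL_MS) {
      // Deleting during iteration is safe on a JS Map
      pending.delete(key)
      removed++
    }
  }
  return removed
}
```

The periodic 10 s interval from the commit would simply call `sweepStaleEntries(pendingChunkHashes, Date.now())` and be cleared when the bot ends.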
- Fix pendingChunkHashes memory leak (combine disconnect handlers)
- Add cache-hit failure recovery (request resend on corrupt cache)
- Add deserialize validation
- Add comprehensive JSDoc documentation
- Clarify server scoping in comments
- Handle pre-1.13 REGISTER channel

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Flush pending metadata saves before switching servers
- Move metadata updates inside serverSupportsChannel blocks to prevent unbounded growth
- Add type guards to computeChunkDataHash for safer input handling
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the … setting.
📝 Walkthrough

Adds in-memory and on-disk chunk geometry and packet caches, integrates section-keyed geometry reuse and invalidation into the renderer, and wires a client↔server chunk-cache protocol with hashing, packet serialization, and replay helpers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant WorldRenderer
    participant GeometryCache
    participant Worker as GeometryWorker
    participant Server

    Client->>WorldRenderer: request chunk load (x,z)
    WorldRenderer->>GeometryCache: computeBlockHash / lookup sectionKey
    alt cache hit (hash matches)
        GeometryCache-->>WorldRenderer: cached geometry
        WorldRenderer->>WorldRenderer: emit synthetic 'geometry' message (simulate worker)
        WorldRenderer->>WorldRenderer: update bookkeeping (sectionsWaiting, etc.)
    else cache miss
        WorldRenderer->>Worker: dispatch geometry generation for section
        Worker-->>WorldRenderer: 'geometry' message (generated geometry)
        WorldRenderer->>GeometryCache: store geometry + blockHash
        GeometryCache->>GeometryCache: persist to memory (and disk if enabled)
    end

    rect rgba(100,150,255,0.5)
        Client->>Server: login handshake + advertise chunk-cache support
        Server-->>Client: report cacheHit / cacheMiss for specific chunk hashes
        alt server instructs to use cached packet
            Client->>Client: retrieve packet from ChunkPacketCache
            Client-->>Server: replay request or emit packet locally
        else server requests full chunk
            Client->>Server: send/relay map_chunk
            Client->>ChunkPacketCache: store packet
        end
    end
```
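The handshake in the lower half of the diagram could look roughly like this on the client side. All message and field names here are guesses drawn from the diagram, not the actual chunk-cache channel wire format.

```typescript
// Hypothetical message shapes for the chunk-cache channel.
type ServerReply =
  | { type: 'cacheHit', chunkKey: string }
  | { type: 'cacheMiss', chunkKey: string, newHash: string }

// On cacheHit the stored packet is replayed locally; on cacheMiss the
// server will send the full map_chunk, so we remember the hash to store
// alongside the incoming packet. Both callbacks are stand-ins for the
// real ChunkPacketCache integration.
function handleServerReply (
  reply: ServerReply,
  replayFromCache: (chunkKey: string) => void,
  awaitFullChunk: (chunkKey: string, hash: string) => void
): void {
  if (reply.type === 'cacheHit') {
    replayFromCache(reply.chunkKey)
  } else {
    awaitFullChunk(reply.chunkKey, reply.newHash)
  }
}
```

The login side of the exchange would just send the client's stored `"x,z" -> hash` map, mirroring the "advertise chunk-cache support" arrow above.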
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/chunkGeometryCache.ts (1)
257-289: Add validation for transparent geometry fields.

The deserialization validates `positions` and `indices`, but the transparent geometry arrays (`t_positions`, `t_normals`, etc.) are passed through without validation. If these are undefined in the serialized data but expected by consumers, it could cause runtime errors.

💡 Suggested validation

```diff
 deserializeGeometry (serialized: SerializedGeometry): MesherGeometryOutput {
   // Validate required fields exist
   if (!serialized.positions || !serialized.indices) {
     throw new Error('Serialized geometry missing required fields (positions or indices)')
   }
   return {
     sx: serialized.sx,
     sy: serialized.sy,
     sz: serialized.sz,
     positions: new Float32Array(serialized.positions),
     normals: new Float32Array(serialized.normals),
     colors: new Float32Array(serialized.colors),
     uvs: new Float32Array(serialized.uvs),
-    t_positions: serialized.t_positions,
-    t_normals: serialized.t_normals,
-    t_colors: serialized.t_colors,
-    t_uvs: serialized.t_uvs,
+    t_positions: serialized.t_positions ? new Float32Array(serialized.t_positions) : undefined,
+    t_normals: serialized.t_normals ? new Float32Array(serialized.t_normals) : undefined,
+    t_colors: serialized.t_colors ? new Float32Array(serialized.t_colors) : undefined,
+    t_uvs: serialized.t_uvs ? new Float32Array(serialized.t_uvs) : undefined,
```

renderer/viewer/lib/worldrendererCommon.ts (1)
1022-1028: Cache bypassed when `useChangeWorker` is true, even if cached geometry is valid.

The condition `!useChangeWorker` prevents cache usage for block update scenarios. While this may be intentional to ensure proper worker tracking, it means block updates always regenerate geometry even when neighboring sections haven't actually changed.

Consider whether this exclusion is necessary for all `useChangeWorker` cases, or only when the section itself was modified (which is already handled by `invalidateSectionCache`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/worldrendererCommon.ts` around lines 1022 - 1028, The cache is being forcibly bypassed whenever useChangeWorker is true due to the `if (value && !useChangeWorker && this.tryUseCachedGeometry(key))` check; change this so cached geometry is still used for change-worker updates unless the specific section was invalidated. Update the condition to allow cache hits when useChangeWorker is true but the section is not marked dirty (e.g., replace `!useChangeWorker` with a check like `!this.isSectionInvalidated(key)` or have `invalidateSectionCache` set a per-section dirty flag that `tryUseCachedGeometry(key)` or a new `isSectionInvalidated(key)` reads), and keep the `logWorkerWork(() => `<- cache hit for section ${key}`)` and early return behavior intact.

src/customChannels.ts (1)
889-919: Handle nested buffers in serialization.

The `serializeMapChunkPacket` function only processes top-level Buffer and TypedArray fields. If packet fields like `blockEntities`, `heightmaps`, or `biomes` contain nested objects with Buffer data (which is common in NBT/structured packet formats), those nested buffers will not be properly serialized and will be lost when JSON-encoded, causing deserialization to fail. Consider implementing recursive buffer detection and marking to handle arbitrarily nested buffer structures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/customChannels.ts` around lines 889 - 919, serializeMapChunkPacket only converts buffers/typed arrays at the top level so nested Buffer/TypedArray/ArrayBuffer values (e.g., inside blockEntities, heightmaps, biomes) are lost; change the implementation to walk the packet recursively (handle objects and arrays) and replace any Buffer, ArrayBuffer or ArrayBuffer view with the same marker objects you already use ({__type:'buffer'|'typedArray', arrayType?, data: [...]}) so nested buffers are preserved for JSON.stringify and later reconstruction; update references to the function serializeMapChunkPacket and ensure the traversal preserves null/undefined and primitive values and avoids infinite loops (e.g., detect cycles) while keeping the existing serialization shape.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@renderer/viewer/lib/worldrendererCommon.ts`:
- Around line 670-675: The current code uses computeChunkDataHash(chunk) to set
one hash for every sectionKey, causing section-level cache invalidation bugs;
update the logic that populates this.sectionHashes (used by
tryUseCachedGeometry) to compute and store a per-section hash instead: for each
section iterate the section's block-state Uint16Array (the same data stored in
storeSectionBlockStates) and call computeBlockHash on that per-section array,
then set this.sectionHashes.set(sectionKey, sectionHash) rather than the single
chunkHash; ensure references to computeChunkDataHash remain for chunk-wide use
only and that tryUseCachedGeometry continues comparing cached.hash to these
per-section hashes.
In `@src/chunkGeometryCache.ts`:
- Around line 36-60: The SerializedGeometry interface has a mismatched property
name: remove or rename transparentIndicesStart to match the actual property name
used in MesherGeometryOutput; locate MesherGeometryOutput to confirm the correct
identifier (e.g., transparentIndicesOffset or transparentStartIndex) and update
SerializedGeometry accordingly (in the interface definition), ensuring any code
that reads/writes this serialized field uses the exact same property name.
- Line 279: The code references a non-existent property transparentIndicesStart
on MesherGeometryOutput during serialization and deserialization; find the
actual property name on MesherGeometryOutput (e.g., transparentIndices* or
similar) and replace transparentIndicesStart in both the serialization block
that builds the serialized object and the deserialization site that reads
serialized.transparentIndicesStart so they use the correct property identifier;
update the serialization key and the corresponding deserialization access
together (search for occurrences of transparentIndicesStart and the
MesherGeometryOutput type to locate both places).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2bd731ef-302d-4a72-9ae2-0c3ae64c4053
📒 Files selected for processing (5)
- renderer/viewer/lib/chunkCacheIntegration.ts
- renderer/viewer/lib/worldrendererCommon.ts
- src/chunkGeometryCache.ts
- src/chunkPacketCache.ts
- src/customChannels.ts
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/chunkGeometryCache.ts (1)
478-511: Consider batching file deletions for disk eviction.

The disk eviction loop deletes files sequentially, which could be slow when evicting many entries. Consider using `Promise.all` with a concurrency limit for better performance while avoiding I/O saturation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chunkGeometryCache.ts` around lines 478 - 511, The evictOldEntries method currently deletes section files sequentially which is slow; update evictOldEntries to collect the candidate section entries (from metadata.sections) and perform file deletions in parallel with a bounded concurrency (e.g., use a small worker pool or a library like p-limit) while still removing entries from this.memoryCache and delete-ing metadata.sections immediately or atomically; use this.getSectionPath and existsViaStats inside each deletion task and handle per-task errors without failing the whole batch, then await Promise.all on the limited set of tasks and adjust the final debug message to reflect the actual number of deletions completed.

renderer/viewer/lib/chunkCacheIntegration.ts (1)
261-286: Consider unique fallback hashes or skipping cache for invalid data.

When `computeChunkDataHash` returns the fallback `"00000000"` for invalid/unknown input types, multiple invalid chunks would share the same hash, potentially causing incorrect cache hits. Consider returning `null` to signal that caching should be skipped for this chunk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 261 - 286, computeChunkDataHash currently returns a shared fallback string "00000000" for invalid/unknown input, causing different invalid chunks to collide in cache; change computeChunkDataHash to return null instead of "00000000" for the two fallback branches (invalid data and unknown type), update its signature/type to string | null, and then update all call sites that use computeChunkDataHash (e.g., cache key generation and chunk cache lookup/insert code) to treat a null return as "skip caching for this chunk" rather than using the hash (do not attempt to insert or lookup in cache when null). Ensure logging still warns on invalid data but do not produce a shared hash.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 207-222: The cache misses are caused by mismatched hashing:
generateBlockHash uses SHA-256 when WebCrypto is available while
computeBlockHash uses FNV-1a (8-char hex); update generateBlockHash to always
compute the same FNV-1a 8-character hex (e.g., by calling or reusing
computeBlockHash) so both sides produce identical hashes, and remove
computeBlockHashAsync if it is unused (or alternatively export generateBlockHash
and replace calls to computeBlockHash with it) to ensure one consistent hashing
implementation across the codebase.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c837d851-fa6c-404c-863a-05eef4b09822
📒 Files selected for processing (4)
- renderer/viewer/lib/chunkCacheIntegration.ts
- renderer/viewer/lib/worldrendererCommon.ts
- src/chunkGeometryCache.ts
- src/customChannels.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- renderer/viewer/lib/worldrendererCommon.ts
@zardoy CI is green on this PR now, including the new tests. Is there anything else you want changed on the client side here, or is this mainly waiting on real server/in-game testing now?
I don't think so. Unfortunately I didn't get to it in time while the project was in active development, and the server side is not maintained anymore. But if you want, I can add to the TypeScript repo where these changes should be implemented, and you can test it there.
Yes please! I'd love to implement and test the server-side changes in the TypeScript repo. Could you point me to where those changes should go?
- Extract shared FNV-1a hash into blockHash.ts (fixes hash mismatch: SHA-256 vs FNV-1a caused 100% cache misses when WebCrypto available)
- Extract replay tracking into chunkCacheReplay.ts; emit both 'packet' and 'map_chunk' events so mineflayer world loader processes cache hits
- Fix per-section hashing in worldrendererCommon.ts (was chunk-level)
- computeChunkDataHash returns null instead of '00000000' sentinel to prevent false cache hits across chunks with invalid data
- Parallelize evictOldEntries file deletions with Promise.all in both chunkGeometryCache and chunkPacketCache
- Deserialize transparent geometry arrays as Float32Array (not number[])
- Remove dead computeBlockHashAsync export (unused async wrapper)
- Add 63 tests across all cache modules covering: disk persistence, memory-only mode, hash mismatch, invalidation, server isolation, clear, transparent geometry round-trip, Uint32Array indices, stale-file cleanup, computePacketHash, hasValidCache, getCachedChunksInfo, section block state store/get/clear, parseSectionKey, isGeometryCacheable, computeChunkDataHash
Quick update: I pushed a follow-up. It mostly tightens the edges around the cache work rather than changing the core approach. I cleaned up the new tests so lint passes and simplified a few things. The earlier cache fixes are still in place here too: shared block hashing, replayed map_chunk packets, and the per-section hashing fix.
All of those pass locally, and CI is green.
Actionable comments posted: 1
🧹 Nitpick comments (8)
renderer/viewer/lib/chunkCacheIntegration.test.ts (2)
3-15: Minor: Import order.

The lint hint suggests `../../../src/blockHash` should be imported before `./chunkCacheIntegration` for consistent ordering. This is a stylistic preference.

♻️ Optional fix

```diff
+import { computeBlockStateHash } from '../../../src/blockHash'
 import {
   computeBlockHash,
   computeChunkDataHash,
   createSectionKey,
   clearAllBlockStates,
   clearSectionBlockStates,
   extractChunkSectionBlockStates,
   getSectionBlockStates,
   isGeometryCacheable,
   parseSectionKey,
   storeSectionBlockStates,
 } from './chunkCacheIntegration'
-import { computeBlockStateHash } from '../../../src/blockHash'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/chunkCacheIntegration.test.ts` around lines 3 - 15, Reorder the imports so that external/module-relative imports come before local-relative ones: move the import of computeBlockStateHash from '../../../src/blockHash' to appear above the grouped import from './chunkCacheIntegration'. Update the top of chunkCacheIntegration.test.ts so computeBlockStateHash is imported first (before functions like computeBlockHash, computeChunkDataHash, createSectionKey, clearAllBlockStates, clearSectionBlockStates, extractChunkSectionBlockStates, getSectionBlockStates, isGeometryCacheable, parseSectionKey, storeSectionBlockStates) to satisfy the lint ordering rule.
17-17: Using `require()` for prismarine-chunk.

The dynamic `require()` is likely intentional to support version-specific chunk loading at runtime. If the project uses ESM exclusively, consider using dynamic `import()` instead, but this may depend on prismarine-chunk's module format.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/chunkCacheIntegration.test.ts` at line 17, The test uses a CommonJS require for prismarine-chunk via the prismarineChunk variable which won't work in an ESM-only codebase; replace the dynamic require with a dynamic import (e.g., use await import('prismarine-chunk') where the test setup/runtime can be async) and update uses to handle a possible default export (check prismarine-chunk export shape and use imported.default if needed); ensure the import is performed at runtime in the same place where prismarineChunk was required so version-specific loading still works and tests remain async-compatible.

src/chunkPacketCache.ts (2)
345-370: `clear()` iterates and deletes from `memoryCache` during iteration: safe in modern JS, but consider simplification.

Deleting keys while iterating over `Map.keys()` is safe in JavaScript (the iterator is live and handles deletions). However, a simpler approach would be to just call `this.memoryCache.clear()`, since `setServerInfo` already clears the whole map on server switch.

♻️ Optional simplification

```diff
 async clear (): Promise<void> {
-  // Clear memory cache for current server
-  const serverPrefix = `${this.serverAddress}:`
-  for (const key of this.memoryCache.keys()) {
-    if (key.startsWith(serverPrefix)) {
-      this.memoryCache.delete(key)
-    }
-  }
+  // Clear memory cache (only current server's entries exist after setServerInfo)
+  this.memoryCache.clear()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chunkPacketCache.ts` around lines 345 - 370, The clear() method currently iterates over this.memoryCache.keys() and deletes matching keys, which is unnecessary—replace that loop with a single this.memoryCache.clear() call to simplify and ensure the in-memory cache is fully emptied; update the clear() implementation to call this.memoryCache.clear() (referencing the clear() method and memoryCache property) and rely on existing setServerInfo behavior for server switches.
102-120: Potential race: `loadMetadata` is async but callers may proceed immediately.

After `setServerInfo` returns, `loadMetadata` may still be in progress. If `get()` or `getCachedChunksInfo()` is called immediately, they may see an empty `this.metadata.chunks` even though disk entries exist.

The integration in `customChannels.ts` chains `setServerInfo` through `cacheStatePromise` and awaits it before packet handling, which mitigates this. However, direct callers of `setServerInfo` could encounter stale metadata reads if they don't await the returned promise.

Since the promise is awaited and the cache is designed to be eventually consistent (cache misses trigger re-fetch), this is acceptable, but worth noting for future callers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chunkPacketCache.ts` around lines 102 - 120, setServerInfo currently can lead to a race where callers read metadata before loadMetadata finishes; ensure callers can reliably wait for metadata by making setServerInfo set and return a cache-ready promise that resolves only after loadMetadata completes (e.g., assign this.cacheStatePromise = this.loadMetadata() and await it inside setServerInfo), and/or add an explicit waitForMetadata or waitForCacheReady method that returns that promise; update usages to await this.cacheStatePromise or the new wait method before calling get() or getCachedChunksInfo().

renderer/viewer/lib/chunkCacheIntegration.ts (3)
217-227: All-or-nothing extraction may cause unnecessary cache fallback.

If any single section fails to decode (line 221 returns `null`), the entire chunk extraction fails and the caller falls back to `computeChunkDataHash`. This is conservative but means a single malformed section causes all sections to lose per-section hash precision.

Consider continuing with successfully decoded sections and only skipping the failed ones, or logging which section failed for debugging.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 217 - 227, The loop over parsedChunk.sections should not abort the whole extraction when decodeChunkSectionBlockStates fails for one section; change the behavior in the for-loop where decodeChunkSectionBlockStates(sectionValue) currently leads to an early return null — instead, skip that section (continue), optionally log the failed sectionIndex or section Y (using sectionIndex and minY + sectionIndex * 16) for debugging, and still populate sectionBlockStatesByY for successfully decoded sections so callers get per-section hashes rather than falling back to computeChunkDataHash for the entire chunk.
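The "skip failed sections" suggestion above could look roughly like this; `decode` stands in for `decodeChunkSectionBlockStates`, and the helper name is hypothetical.

```typescript
// Decodes as many sections as possible instead of aborting the whole
// chunk when one section is malformed. Failed sections are skipped, so
// the remaining ones keep their per-section hash precision.
function extractSections<T> (
  sections: unknown[],
  decode: (section: unknown) => T | null
): Map<number, T> {
  const result = new Map<number, T>()
  for (const [index, section] of sections.entries()) {
    const decoded = decode(section)
    if (decoded === null) {
      // One malformed section no longer aborts the whole extraction;
      // a real implementation might also log the failing index here.
      continue
    }
    result.set(index, decoded)
  }
  return result
}
```

Callers would fall back to a chunk-wide hash only for the skipped indices, not for every section in the chunk.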
137-141: Consider handling out-of-bounds palette index more explicitly.

If `palette` exists but `value` exceeds `palette.length`, `palette[value]` returns `undefined`, and the nullish coalescing falls back to `value`. This is likely intentional for direct palette types, but the implicit fallback may mask malformed data.

Consider adding a bounds check or comment clarifying the expected behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 137 - 141, The loop in chunkCacheIntegration that maps values to blockStates uses palette?.[value] ?? value which silently falls back when value is outside palette bounds; update the logic in the for loop (the block that calls getSerializedBitArrayValue and assigns blockStates[index]) to explicitly check if palette is defined and value < palette.length before using palette[value], otherwise either (a) fall back to value intentionally with an inline comment clarifying this behavior, or (b) log/throw an error for malformed data; choose one approach and implement it so out-of-bounds palette accesses are handled explicitly rather than relying on the nullish coalescing.
70-71: Consider moving block state storage to per-instance scope for better separation of concerns.
`sectionBlockStates` is a module-level global `Map` shared across potential renderer instances. While the current architecture enforces a singleton renderer pattern (`window.world`), making this storage per-instance would improve separation of concerns and prevent issues if multi-renderer support is added in the future (e.g., split-screen, picture-in-picture).

This is a design consideration rather than a current bug, but scoping the cache per-renderer instance would be more resilient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 70 - 71, The module-level Map named sectionBlockStates should be made an instance property on the renderer/chunk-cache integration so each renderer has its own block-state cache; locate all uses of sectionBlockStates in chunkCacheIntegration.ts and replace them with a per-instance property (e.g., this.sectionBlockStates) initialized in the constructor or factory for the renderer/integration class, update any helper functions to accept the instance or bind them as methods so they reference the instance property instead of the global, and remove the top-level const sectionBlockStates to avoid shared state across renderer instances.

renderer/viewer/lib/worldrendererCommon.ts (1)
846-859: Consider adding error handling for rejected `addColumn` calls.
`Promise.all` at line 848 will reject if any `addColumn` call throws. The `finally` block ensures cleanup runs, but the rejection propagates unhandled (the async IIFE result is voided at line 846). This could cause unobserved promise rejections. Consider wrapping individual `addColumn` calls or adding a `.catch()` to log failures gracefully.
♻️ Proposed fix
```diff
 void (async () => {
   try {
     await Promise.all(currentLoadChunkBatch!.data.map(async (args) => {
-      this.queuedChunks.delete(`${args[0]},${args[1]}`)
-      await this.addColumn(...args as Parameters<typeof this.addColumn>)
+      try {
+        this.queuedChunks.delete(`${args[0]},${args[1]}`)
+        await this.addColumn(...args as Parameters<typeof this.addColumn>)
+      } catch (err) {
+        console.error(`Failed to load chunk ${args[0]},${args[1]}:`, err)
+      }
     }))
   } finally {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/worldrendererCommon.ts` around lines 846 - 859, The Promise.all call that awaits currentLoadChunkBatch!.data.map(async (args) => { ... await this.addColumn(...args) }) can reject and produce unobserved promise rejections; change it so individual addColumn failures are handled (e.g., replace Promise.all with Promise.allSettled or wrap each addColumn call in a try/catch or .catch that logs the error) while still deleting the queuedChunks entry for that args and preserving the finally cleanup of queuedFunctions/currentLoadChunkBatch; update the async IIFE around currentLoadChunkBatch, the map callback that calls this.addColumn, and any logging to ensure failures are logged (include function names addColumn, queuedChunks, queuedFunctions, currentLoadChunkBatch) instead of letting the rejection propagate.
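The `Promise.allSettled` alternative the prompt mentions can be sketched as follows; `loadBatch`, its parameters, and the stubbed `addColumn` are illustrative stand-ins for the real class members:

```typescript
// Sketch of the Promise.allSettled alternative: one failing column no longer
// rejects the whole batch; failures are logged per chunk instead.
async function loadBatch (
  batch: Array<[number, number]>,
  addColumn: (x: number, z: number) => Promise<void>,
  queuedChunks: Set<string>
): Promise<void> {
  const results = await Promise.allSettled(batch.map(async ([x, z]) => {
    queuedChunks.delete(`${x},${z}`)
    await addColumn(x, z)
  }))
  for (const [i, result] of results.entries()) {
    if (result.status === 'rejected') {
      console.error(`Failed to load chunk ${batch[i][0]},${batch[i][1]}:`, result.reason)
    }
  }
}
```

Unlike `Promise.all`, this always resolves, so the surrounding `finally` cleanup is reached without an unobserved rejection.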
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 31-32: ESLint flagged the import order: non-type imports must come
before type imports — in chunkCacheIntegration.ts move the non-type import
computeBlockStateHash so it appears above the type-only import
MesherGeometryOutput; update the imports so computeBlockStateHash is imported
first and MesherGeometryOutput (type) follows to satisfy the linter.
---
Nitpick comments:
In `@renderer/viewer/lib/chunkCacheIntegration.test.ts`:
- Around line 3-15: Reorder the imports so that external/module-relative imports
come before local-relative ones: move the import of computeBlockStateHash from
'../../../src/blockHash' to appear above the grouped import from
'./chunkCacheIntegration'. Update the top of chunkCacheIntegration.test.ts so
computeBlockStateHash is imported first (before functions like computeBlockHash,
computeChunkDataHash, createSectionKey, clearAllBlockStates,
clearSectionBlockStates, extractChunkSectionBlockStates, getSectionBlockStates,
isGeometryCacheable, parseSectionKey, storeSectionBlockStates) to satisfy the
lint ordering rule.
- Line 17: The test uses a CommonJS require for prismarine-chunk via the
prismarineChunk variable which won't work in an ESM-only codebase; replace the
dynamic require with a dynamic import (e.g., use await
import('prismarine-chunk') where the test setup/runtime can be async) and update
uses to handle a possible default export (check prismarine-chunk export shape
and use imported.default if needed); ensure the import is performed at runtime
in the same place where prismarineChunk was required so version-specific loading
still works and tests remain async-compatible.
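The require-to-import conversion follows a generic interop pattern, sketched here with a hypothetical helper (demonstrated against a Node built-in rather than prismarine-chunk so the example stays self-contained):

```typescript
// Hypothetical interop helper: dynamically import a module and unwrap a
// CommonJS-style default export when one is present.
async function importInterop<T> (specifier: string): Promise<T> {
  const mod = await import(specifier)
  return ((mod as { default?: T }).default ?? mod) as T
}
```

In the test setup, something like `const prismarineChunk = await importInterop('prismarine-chunk')` would then replace the `require` call at the same point, keeping the version-specific loading behavior while remaining ESM-compatible.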
In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 217-227: The loop over parsedChunk.sections should not abort the
whole extraction when decodeChunkSectionBlockStates fails for one section;
change the behavior in the for-loop where
decodeChunkSectionBlockStates(sectionValue) currently leads to an early return
null — instead, skip that section (continue), optionally log the failed
sectionIndex or section Y (using sectionIndex and minY + sectionIndex * 16) for
debugging, and still populate sectionBlockStatesByY for successfully decoded
sections so callers get per-section hashes rather than falling back to
computeChunkDataHash for the entire chunk.
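The continue-on-failure behavior described above can be sketched with stubbed types; `extractSections` and `decode` are illustrative, not the PR's actual signatures:

```typescript
// Illustrative per-section tolerance: a failed decode skips that section
// instead of aborting the whole extraction; successful sections still land
// in the per-Y map keyed by minY + sectionIndex * 16.
function extractSections (
  sections: unknown[],
  minY: number,
  decode: (section: unknown) => Uint16Array | null
): Map<number, Uint16Array> {
  const sectionBlockStatesByY = new Map<number, Uint16Array>()
  for (const [sectionIndex, sectionValue] of sections.entries()) {
    const decoded = decode(sectionValue)
    if (!decoded) {
      console.warn(`Skipping undecodable section at y=${minY + sectionIndex * 16}`)
      continue // skip just this section, keep the rest
    }
    sectionBlockStatesByY.set(minY + sectionIndex * 16, decoded)
  }
  return sectionBlockStatesByY
}
```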
- Around line 137-141: The loop in chunkCacheIntegration that maps values to
blockStates uses palette?.[value] ?? value which silently falls back when value
is outside palette bounds; update the logic in the for loop (the block that
calls getSerializedBitArrayValue and assigns blockStates[index]) to explicitly
check if palette is defined and value < palette.length before using
palette[value], otherwise either (a) fall back to value intentionally with an
inline comment clarifying this behavior, or (b) log/throw an error for malformed
data; choose one approach and implement it so out-of-bounds palette accesses are
handled explicitly rather than relying on the nullish coalescing.
- Around line 70-71: The module-level Map named sectionBlockStates should be
made an instance property on the renderer/chunk-cache integration so each
renderer has its own block-state cache; locate all uses of sectionBlockStates in
chunkCacheIntegration.ts and replace them with a per-instance property (e.g.,
this.sectionBlockStates) initialized in the constructor or factory for the
renderer/integration class, update any helper functions to accept the instance
or bind them as methods so they reference the instance property instead of the
global, and remove the top-level const sectionBlockStates to avoid shared state
across renderer instances.
In `@renderer/viewer/lib/worldrendererCommon.ts`:
- Around line 846-859: The Promise.all call that awaits
currentLoadChunkBatch!.data.map(async (args) => { ... await
this.addColumn(...args) }) can reject and produce unobserved promise rejections;
change it so individual addColumn failures are handled (e.g., replace
Promise.all with Promise.allSettled or wrap each addColumn call in a try/catch
or .catch that logs the error) while still deleting the queuedChunks entry for
that args and preserving the finally cleanup of
queuedFunctions/currentLoadChunkBatch; update the async IIFE around
currentLoadChunkBatch, the map callback that calls this.addColumn, and any
logging to ensure failures are logged (include function names addColumn,
queuedChunks, queuedFunctions, currentLoadChunkBatch) instead of letting the
rejection propagate.
In `@src/chunkPacketCache.ts`:
- Around line 345-370: The clear() method currently iterates over
this.memoryCache.keys() and deletes matching keys, which is unnecessary—replace
that loop with a single this.memoryCache.clear() call to simplify and ensure the
in-memory cache is fully emptied; update the clear() implementation to call
this.memoryCache.clear() (referencing the clear() method and memoryCache
property) and rely on existing setServerInfo behavior for server switches.
- Around line 102-120: setServerInfo currently can lead to a race where callers
read metadata before loadMetadata finishes; ensure callers can reliably wait for
metadata by making setServerInfo set and return a cache-ready promise that
resolves only after loadMetadata completes (e.g., assign this.cacheStatePromise
= this.loadMetadata() and await it inside setServerInfo), and/or add an explicit
waitForMetadata or waitForCacheReady method that returns that promise; update
usages to await this.cacheStatePromise or the new wait method before calling
get() or getCachedChunksInfo().
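The cache-ready handshake described in the last item can be sketched as follows; the class is a reduced stand-in (`loadMetadata` is stubbed in memory rather than reading IndexedDB):

```typescript
// Sketch of the cache-ready promise: setServerInfo stores and returns a
// promise that resolves only after metadata loading completes, so callers
// can await it before reading cached-chunk info.
class ChunkPacketCache {
  private cacheReady: Promise<void> = Promise.resolve()
  private metadata: Record<string, unknown> = {}

  setServerInfo (address: string): Promise<void> {
    this.cacheReady = this.loadMetadata(address)
    return this.cacheReady
  }

  async waitForCacheReady (): Promise<void> {
    await this.cacheReady
  }

  private async loadMetadata (address: string): Promise<void> {
    // Stand-in for the asynchronous IndexedDB read.
    this.metadata = { server: address }
  }

  getCachedChunksInfo (): Record<string, unknown> {
    return this.metadata
  }
}
```

Callers then `await cache.setServerInfo(addr)` (or `waitForCacheReady()`) before `get()` / `getCachedChunksInfo()`, closing the race window.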
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b3eae7c8-372d-4763-9d33-464ea01e063e
📒 Files selected for processing (13)
- renderer/viewer/lib/chunkCacheIntegration.test.ts
- renderer/viewer/lib/chunkCacheIntegration.ts
- renderer/viewer/lib/worldrendererCommon.ts
- src/blockHash.test.ts
- src/blockHash.ts
- src/chunkCacheReplay.test.ts
- src/chunkCacheReplay.ts
- src/chunkGeometryCache.test.ts
- src/chunkGeometryCache.ts
- src/chunkPacketCache.test.ts
- src/chunkPacketCache.ts
- src/customChannels.ts
- vitest.config.ts
✅ Files skipped from review due to trivial changes (2)
- vitest.config.ts
- src/chunkGeometryCache.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/customChannels.ts
🧹 Nitpick comments (1)
renderer/viewer/lib/chunkCacheIntegration.ts (1)
187-189: Consider using nullish coalescing for semantic clarity. Using `?? null` instead of `|| null` more precisely expresses the intent of converting undefined (a missing key) to null, rather than relying on falsy coercion.
♻️ Suggested refinement

```diff
 export function getSectionBlockStates (sectionKey: string): Uint16Array | null {
-  return sectionBlockStates.get(sectionKey) || null
+  return sectionBlockStates.get(sectionKey) ?? null
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 187 - 189, The function getSectionBlockStates currently returns sectionBlockStates.get(sectionKey) || null, which relies on falsy coercion; since Map.get only yields the stored value or undefined, change the expression to use the nullish coalescing operator (?? null) so the conversion of a missing key to null is explicit — update the return in getSectionBlockStates to use ?? null and keep the rest of the function intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 187-189: The function getSectionBlockStates currently returns
sectionBlockStates.get(sectionKey) || null, which relies on falsy coercion;
since Map.get only yields the stored value or undefined, change the expression
to use the nullish coalescing operator (?? null) so the conversion of a missing
key to null is explicit — update the return in getSectionBlockStates to use
?? null and keep the rest of the function intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a0fa275f-75e4-4bde-b64e-245e20270f44
📒 Files selected for processing (2)
- renderer/viewer/lib/chunkCacheIntegration.test.ts
- renderer/viewer/lib/chunkCacheIntegration.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- renderer/viewer/lib/chunkCacheIntegration.test.ts
Summary
Implements long-term chunk geometry caching to improve performance when returning to previously visited areas, addressing issue #476.
Features:
- chunk-cache channel
Implementation Details:
- src/chunkGeometryCache.ts: IndexedDB cache manager
- renderer/viewer/lib/chunkCacheIntegration.ts: Cache utilities
- src/customChannels.ts: Channel registration (minecraft-web-client:chunk-cache channel)
- renderer/viewer/lib/worldrendererCommon.ts: Cache integration
How It Works:
Test Plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes / Reliability
Tests