
feat: add long-term chunk geometry caching #529

Open

felmonon wants to merge 21 commits into zardoy:next from felmonon:feat/chunk-geometry-caching

Conversation

@felmonon felmonon commented Mar 24, 2026

Summary

Implements long-term chunk geometry caching to improve performance when returning to previously visited areas, addressing issue #476.

Features:

  • IndexedDB-based persistent caching for servers that support the chunk-cache channel
  • In-memory LRU caching for servers without channel support
  • Automatic cache invalidation when blocks are modified
  • Cache hit/miss tracking for performance monitoring
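
The in-memory LRU fallback mentioned above can be sketched with a plain `Map`, whose insertion order makes eviction straightforward. This is an illustrative sketch under assumed names (`LruCache` and its methods are not the PR's actual API):

```typescript
// Minimal in-memory LRU cache sketch. A JavaScript Map preserves insertion
// order, so the first key in iteration order is the least recently used.
class LruCache<V> {
  private readonly entries = new Map<string, V>()

  constructor (private readonly maxSize: number) {}

  get (key: string): V | undefined {
    const value = this.entries.get(key)
    if (value === undefined) return undefined
    // Re-insert to mark this entry as most recently used
    this.entries.delete(key)
    this.entries.set(key, value)
    return value
  }

  set (key: string, value: V): void {
    this.entries.delete(key)
    this.entries.set(key, value)
    if (this.entries.size > this.maxSize) {
      // Evict the least recently used entry (first in iteration order)
      const oldest = this.entries.keys().next().value!
      this.entries.delete(oldest)
    }
  }
}
```

Because `get` re-inserts the touched entry, eviction never needs timestamps or a separate recency list.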

Implementation Details:

  1. src/chunkGeometryCache.ts: IndexedDB cache manager

    • Stores serialized geometry data with LRU eviction
    • Configurable max cache size (500 entries for the IndexedDB cache, 100 for the in-memory cache)
    • Server-scoped caching to prevent cross-server conflicts
  2. renderer/viewer/lib/chunkCacheIntegration.ts: Cache utilities

    • Block hash computation for cache keys
    • Section block state tracking
    • Geometry validation
  3. src/customChannels.ts: Channel registration

    • Registers minecraft-web-client:chunk-cache channel
    • Enables persistent IndexedDB caching when server responds
  4. renderer/viewer/lib/worldrendererCommon.ts: Cache integration

    • Checks cache before sending geometry requests to workers
    • Saves generated geometry to cache
    • Invalidates cache on block updates

How It Works:

  • When a chunk section needs rendering, the system first checks the geometry cache
  • If a cache hit occurs, the cached geometry is used immediately (skipping worker processing)
  • If a cache miss occurs, workers generate geometry normally and the result is cached
  • When blocks change, the affected section's cache is invalidated
  • For servers supporting the chunk-cache channel, geometry is persisted to IndexedDB

Test Plan

  • Load a world and move around, verify chunks render normally
  • Return to previously visited chunks, verify faster loading (cache hits)
  • Modify blocks and verify the affected chunks re-render correctly
  • Reconnect to the same server and verify cached chunks still work

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • On-disk and in-memory chunk-geometry and chunk-packet caches with server-scoped persistence, plus a client↔server cache protocol for replaying cached chunks and seeding renderer preload for faster visuals.
    • Renderer now uses section-level geometry caching and can pre-seed cache entries during chunk load.
  • Bug Fixes / Reliability

    • Improved cache invalidation, short‑circuit cache-hit paths, safer fallbacks on mismatch/disk errors, TTL cleanup, and lifecycle clearing on reset/disconnect.
  • Tests

    • Extensive integration/unit tests covering decoding, hashing, cache persistence, replay, and invalidation.

Felmon Fekadu and others added 15 commits February 4, 2026 00:40
Implements chunk geometry caching to improve performance when returning
to previously visited areas:

- IndexedDB-based caching for servers that support chunk-cache channel
- In-memory caching with LRU eviction for other servers
- Automatic cache invalidation when blocks change
- Cache hit/miss tracking for performance monitoring

Files added:
- src/chunkGeometryCache.ts: IndexedDB cache manager with LRU eviction
- renderer/viewer/lib/chunkCacheIntegration.ts: Block hash computation utilities

Files modified:
- src/customChannels.ts: Register chunk-cache channel for server support detection
- renderer/viewer/lib/worldrendererCommon.ts: Integrate cache into geometry handling

This addresses issue zardoy#476 - long-term chunk geometry caching.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use numeric separators for FNV hash constants
- Use for-of loop instead of indexed for loop
- Use optional chaining for crypto.subtle check
- Use spread operator instead of Array.from
- Simplify boolean return in isGeometryCacheable

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove multi-space inline comments
- Add readonly modifier to memoryCache
- Use spread operator instead of Array.from()
- Add async keyword to promise-returning functions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…uard

- Fix computeBlockHashAsync to pass typed array view instead of .buffer
- Add !useChangeWorker check to cache-hit shortcut in setSectionDirty
- Add sectionHashes validation in tryUseCachedGeometry
- Add WebCrypto guard with FNV-1a fallback in generateBlockHash
- Set serverAddress even when channel is unsupported for cache scoping

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add computeChunkDataHash to populate sectionHashes in addColumn
- Fix tryUseCachedGeometry to properly complete sectionFinished workflow
- Update clear() to only remove current server's entries, not all data
- Delete sectionHashes entries when invalidating section cache

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move eslint-disable comment to the correct line to properly suppress
unicorn/prefer-spread rule. ArrayLike<number> is not Iterable so we
must use Array.from() instead of spread operator.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add chunkPacketCache.ts for storing raw map_chunk packet data
- Redesign protocol per maintainer feedback:
  1. Client sends cached chunk hashes on login
  2. Server responds with cacheHit or hash for new chunks
  3. Client emits cached packets locally on cache hit
- Use FNV-1a hash algorithm (reproducible in Java)
- Cache both packet data and geometry for optimal performance

Co-Authored-By: Claude <noreply@anthropic.com>
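
The commit above picks FNV-1a precisely because it needs only 32-bit integer arithmetic, which Java reproduces exactly. A minimal sketch of a 32-bit FNV-1a producing the 8-hex-character format discussed in this thread (the function name is hypothetical):

```typescript
// 32-bit FNV-1a over raw bytes, formatted as 8 lowercase hex characters.
// Math.imul gives the same 32-bit overflow behavior as Java's int multiply,
// which is what makes the hash reproducible server-side.
function fnv1a32Hex (data: Uint8Array): string {
  let hash = 0x811c9dc5 // FNV offset basis (32-bit)
  for (const byte of data) {
    hash ^= byte
    hash = Math.imul(hash, 0x01000193) // FNV prime, with 32-bit wraparound
  }
  // >>> 0 converts the signed 32-bit result to unsigned before formatting
  return (hash >>> 0).toString(16).padStart(8, '0')
}
```

An empty input yields the offset basis itself (`811c9dc5`), a handy sanity check when porting the algorithm across languages.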
- Replace direct IndexedDB API with browserfs fs.promises
- Store chunks as files: /data/chunk-cache/{server}/{x},{z}.bin
- Store metadata in metadata.json per server directory
- Server isolation via directory structure (fixes cross-server collision)
- Debounced metadata saves for better performance
- Maintain same public API for compatibility

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Migrate chunkGeometryCache.ts from IndexedDB to browserfs fs storage
- Fix channel registration to use custom_payload listener instead of try/catch
- Add clearAllBlockStates() calls in resetWorld/destroy to prevent memory leaks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix import order (type imports before regular imports)
- Use replaceAll instead of replace with /g flag
- Improve map_chunk serialization to handle all version-specific fields
- Properly reconstruct typed arrays with correct constructor type

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add requestChunkResend function to recover when cache-hit fails
- Request chunk resend when cached data is missing or deserialization fails
- Fix ArrayBuffer alignment issues with Buffer.slice for proper bounds
- Ensure TextEncoder output has proper ArrayBuffer bounds

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add TTL-based cleanup for stale pending hash entries (30s timeout)
- Clear pendingChunkHashes on bot disconnect
- Add periodic cleanup interval (every 10s)
- Stop cleanup interval when bot ends

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
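
The TTL cleanup described in this commit can be sketched as a periodic sweep over the pending map. The explicit clock parameter is an illustrative testing seam, not the PR's API:

```typescript
// Sweep stale entries out of a pending-hash map, using the 30 s TTL the
// commit describes. In the real integration this would run on a 10 s
// interval and the interval would be cleared when the bot disconnects.
const PENDING_TTL_MS = 30_000

const pendingChunkHashes = new Map<string, { hash: string; addedAt: number }>()

function cleanupStalePending (now: number = Date.now()): number {
  let removed = 0
  for (const [key, entry] of pendingChunkHashes) {
    if (now - entry.addedAt > PENDING_TTL_MS) {
      pendingChunkHashes.delete(key)
      removed++
    }
  }
  return removed
}
```

Without such a sweep, hashes for chunks the server never responds to would accumulate indefinitely, which is exactly the leak the follow-up commits address.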
- Fix pendingChunkHashes memory leak (combine disconnect handlers)
- Add cache-hit failure recovery (request resend on corrupt cache)
- Add deserialize validation
- Add comprehensive JSDoc documentation
- Clarify server scoping in comments
- Handle pre-1.13 REGISTER channel

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Flush pending metadata saves before switching servers
- Move metadata updates inside serverSupportsChannel blocks to prevent unbounded growth
- Add type guards to computeChunkDataHash for safer input handling
@coderabbitai

coderabbitai bot commented Mar 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds in-memory and on-disk chunk geometry and packet caches, integrates section-keyed geometry reuse and invalidation into the renderer, and wires a client↔server chunk-cache protocol with hashing, packet serialization, and replay helpers.

Changes

  • Chunk cache integration (viewer) — renderer/viewer/lib/chunkCacheIntegration.ts, renderer/viewer/lib/chunkCacheIntegration.test.ts
    New module to decode per-section block-state arrays from serialized chunk JSON, store/retrieve per-section Uint16Array states, compute block/chunk hashes, parse/create section keys, and check geometry cacheability; includes unit tests for decoding, hashing, and store APIs.
  • Renderer integration — renderer/viewer/lib/worldrendererCommon.ts
    Adds section-level geometry cache state (geometryCache, sectionHashes, invalidatedSections, counters), makes addColumn async and pre-warms section hashes, short-circuits worker generation when cached geometry is valid, adds invalidate/update helpers for block edits, and clears the block-state cache on reset/destroy.
  • Persistent geometry cache (server-scoped) — src/chunkGeometryCache.ts, src/chunkGeometryCache.test.ts
    New ChunkGeometryCache singleton persisting per-server section files plus metadata.json, with a memory+disk lifecycle (init/get/set/invalidate/clear/flush), debounced metadata writes, eviction, and tests for persistence, isolation, stale-file handling, and hash generation.
  • Persistent packet cache & protocol — src/chunkPacketCache.ts, src/chunkPacketCache.test.ts, src/customChannels.ts
    New ChunkPacketCache with per-server disk metadata and APIs (get/set/invalidate/clear/flush), deterministic packet hashing, plus channel registration and a client↔server chunk-cache protocol handling cacheHit/cacheMiss, pending-hash tracking, map_chunk interception, and serialization helpers.
  • Packet replay helpers — src/chunkCacheReplay.ts, src/chunkCacheReplay.test.ts
    Helpers to track and emit replayed map_chunk packets (WeakSet tracker, emit replayed events, consume tracker), with tests validating replay and consumption semantics.
  • Block hashing utility & tests — src/blockHash.ts, src/blockHash.test.ts
    Adds an FNV-style computeBlockStateHash returning a stable 8-hex hash for block-state arrays, with a unit test asserting determinism across input forms.
  • Tests config — vitest.config.ts
    Extends the test include list to run the new tests.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant WorldRenderer
    participant GeometryCache as GeometryCache
    participant Worker as GeometryWorker
    participant Server

    Client->>WorldRenderer: request chunk load (x,z)
    WorldRenderer->>GeometryCache: computeBlockHash / lookup sectionKey
    alt cache hit (hash matches)
        GeometryCache-->>WorldRenderer: cached geometry
        WorldRenderer->>WorldRenderer: emit synthetic 'geometry' message (simulate worker)
        WorldRenderer->>WorldRenderer: update bookkeeping (sectionsWaiting, etc.)
    else cache miss
        WorldRenderer->>Worker: dispatch geometry generation for section
        Worker-->>WorldRenderer: 'geometry' message (generated geometry)
        WorldRenderer->>GeometryCache: store geometry + blockHash
        GeometryCache->>GeometryCache: persist to memory (and disk if enabled)
    end

    rect rgba(100,150,255,0.5)
    Client->>Server: login handshake + advertise chunk-cache support
    Server-->>Client: report cacheHit / cacheMiss for specific chunk hashes
    alt server instructs to use cached packet
        Client->>Client: retrieve packet from ChunkPacketCache
        Client-->>Server: replay request or emit packet locally
    else server requests full chunk
        Client->>Server: send/relay map_chunk; Client->>ChunkPacketCache: store packet
    end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

Review effort 4/5

Poem

I nibble hashes and stash each chunk with care,
Paw-printed bytes tucked safely in my lair.
When worlds demand speed, I hop and bring—
Cached geometry ready for the spring. 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 61.54%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title 'feat: add long-term chunk geometry caching' directly and accurately summarizes the main feature implemented across the changeset: a persistent caching system for chunk geometry data.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
src/chunkGeometryCache.ts (1)

257-289: Add validation for transparent geometry fields.

The deserialization validates positions and indices but the transparent geometry arrays (t_positions, t_normals, etc.) are passed through without validation. If these are undefined in the serialized data but expected by consumers, it could cause runtime errors.

💡 Suggested validation
  deserializeGeometry (serialized: SerializedGeometry): MesherGeometryOutput {
    // Validate required fields exist
    if (!serialized.positions || !serialized.indices) {
      throw new Error('Serialized geometry missing required fields (positions or indices)')
    }

    return {
      sx: serialized.sx,
      sy: serialized.sy,
      sz: serialized.sz,
      positions: new Float32Array(serialized.positions),
      normals: new Float32Array(serialized.normals),
      colors: new Float32Array(serialized.colors),
      uvs: new Float32Array(serialized.uvs),
-     t_positions: serialized.t_positions,
-     t_normals: serialized.t_normals,
-     t_colors: serialized.t_colors,
-     t_uvs: serialized.t_uvs,
+     t_positions: serialized.t_positions ? new Float32Array(serialized.t_positions) : undefined,
+     t_normals: serialized.t_normals ? new Float32Array(serialized.t_normals) : undefined,
+     t_colors: serialized.t_colors ? new Float32Array(serialized.t_colors) : undefined,
+     t_uvs: serialized.t_uvs ? new Float32Array(serialized.t_uvs) : undefined,
renderer/viewer/lib/worldrendererCommon.ts (1)

1022-1028: Cache bypassed when useChangeWorker is true, even if cached geometry is valid.

The condition !useChangeWorker prevents cache usage for block update scenarios. While this may be intentional to ensure proper worker tracking, it means block updates always regenerate geometry even when neighboring sections haven't actually changed.

Consider whether this exclusion is necessary for all useChangeWorker cases, or only when the section itself was modified (which is already handled by invalidateSectionCache).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/worldrendererCommon.ts` around lines 1022 - 1028, The
cache is being forcibly bypassed whenever useChangeWorker is true due to the `if
(value && !useChangeWorker && this.tryUseCachedGeometry(key))` check; change
this so cached geometry is still used for change-worker updates unless the
specific section was invalidated. Update the condition to allow cache hits when
useChangeWorker is true but the section is not marked dirty (e.g., replace
`!useChangeWorker` with a check like `!this.isSectionInvalidated(key)` or have
`invalidateSectionCache` set a per-section dirty flag that
`tryUseCachedGeometry(key)` or a new `isSectionInvalidated(key)` reads), and
keep the `logWorkerWork(() => `<- cache hit for section ${key}`)` and early
return behavior intact.
src/customChannels.ts (1)

889-919: Handle nested buffers in serialization.

The serializeMapChunkPacket function only processes top-level Buffer and TypedArray fields. If packet fields like blockEntities, heightmaps, or biomes contain nested objects with Buffer data (which is common in NBT/structured packet formats), those nested buffers will not be properly serialized and will be lost when JSON-encoded, causing deserialization to fail. Consider implementing recursive buffer detection and marking to handle arbitrarily nested buffer structures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/customChannels.ts` around lines 889 - 919, serializeMapChunkPacket only
converts buffers/typed arrays at the top level so nested
Buffer/TypedArray/ArrayBuffer values (e.g., inside blockEntities, heightmaps,
biomes) are lost; change the implementation to walk the packet recursively
(handle objects and arrays) and replace any Buffer, ArrayBuffer or ArrayBuffer
view with the same marker objects you already use
({__type:'buffer'|'typedArray', arrayType?, data: [...]}) so nested buffers are
preserved for JSON.stringify and later reconstruction; update references to the
function serializeMapChunkPacket and ensure the traversal preserves
null/undefined and primitive values and avoids infinite loops (e.g., detect
cycles) while keeping the existing serialization shape.
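
A sketch of the recursive traversal the comment suggests, using the marker shape quoted above. `markBuffersDeep` is a hypothetical name, and the cycle guard here is deliberately simplistic (it also rejects shared non-cyclic references):

```typescript
// Recursively replace Buffers/typed arrays with JSON-safe marker objects
// so nested binary fields survive JSON.stringify.
function markBuffersDeep (value: unknown, seen = new WeakSet<object>()): unknown {
  if (value === null || typeof value !== 'object') return value
  if (seen.has(value)) throw new Error('Cycle detected during packet serialization')
  seen.add(value)
  if (ArrayBuffer.isView(value)) {
    // Covers Buffer (a Uint8Array subclass) and all typed arrays;
    // DataView would need separate byte-wise handling
    const view = value as unknown as ArrayLike<number>
    return { __type: 'typedArray', arrayType: value.constructor.name, data: Array.from(view) }
  }
  if (Array.isArray(value)) return value.map(item => markBuffersDeep(item, seen))
  const result: Record<string, unknown> = {}
  for (const [key, nested] of Object.entries(value)) {
    result[key] = markBuffersDeep(nested, seen)
  }
  return result
}
```

Recording the `arrayType` name is what lets deserialization pick the matching typed-array constructor when reconstructing the packet.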
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@renderer/viewer/lib/worldrendererCommon.ts`:
- Around line 670-675: The current code uses computeChunkDataHash(chunk) to set
one hash for every sectionKey, causing section-level cache invalidation bugs;
update the logic that populates this.sectionHashes (used by
tryUseCachedGeometry) to compute and store a per-section hash instead: for each
section iterate the section's block-state Uint16Array (the same data stored in
storeSectionBlockStates) and call computeBlockHash on that per-section array,
then set this.sectionHashes.set(sectionKey, sectionHash) rather than the single
chunkHash; ensure references to computeChunkDataHash remain for chunk-wide use
only and that tryUseCachedGeometry continues comparing cached.hash to these
per-section hashes.

In `@src/chunkGeometryCache.ts`:
- Around line 36-60: The SerializedGeometry interface has a mismatched property
name: remove or rename transparentIndicesStart to match the actual property name
used in MesherGeometryOutput; locate MesherGeometryOutput to confirm the correct
identifier (e.g., transparentIndicesOffset or transparentStartIndex) and update
SerializedGeometry accordingly (in the interface definition), ensuring any code
that reads/writes this serialized field uses the exact same property name.
- Line 279: The code references a non-existent property transparentIndicesStart
on MesherGeometryOutput during serialization and deserialization; find the
actual property name on MesherGeometryOutput (e.g., transparentIndices* or
similar) and replace transparentIndicesStart in both the serialization block
that builds the serialized object and the deserialization site that reads
serialized.transparentIndicesStart so they use the correct property identifier;
update the serialization key and the corresponding deserialization access
together (search for occurrences of transparentIndicesStart and the
MesherGeometryOutput type to locate both places).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2bd731ef-302d-4a72-9ae2-0c3ae64c4053

📥 Commits

Reviewing files that changed from the base of the PR and between 5e90eb4 and bcb2dee.

📒 Files selected for processing (5)
  • renderer/viewer/lib/chunkCacheIntegration.ts
  • renderer/viewer/lib/worldrendererCommon.ts
  • src/chunkGeometryCache.ts
  • src/chunkPacketCache.ts
  • src/customChannels.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/chunkGeometryCache.ts (1)

478-511: Consider batching file deletions for disk eviction.

The disk eviction loop deletes files sequentially, which could be slow when evicting many entries. Consider using Promise.all with a concurrency limit for better performance while avoiding I/O saturation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/chunkGeometryCache.ts` around lines 478 - 511, The evictOldEntries method
currently deletes section files sequentially which is slow; update
evictOldEntries to collect the candidate section entries (from
metadata.sections) and perform file deletions in parallel with a bounded
concurrency (e.g., use a small worker pool or a library like p-limit) while
still removing entries from this.memoryCache and delete-ing metadata.sections
immediately or atomically; use this.getSectionPath and existsViaStats inside
each deletion task and handle per-task errors without failing the whole batch,
then await Promise.all on the limited set of tasks and adjust the final debug
message to reflect the actual number of deletions completed.
renderer/viewer/lib/chunkCacheIntegration.ts (1)

261-286: Consider unique fallback hashes or skipping cache for invalid data.

When computeChunkDataHash returns the fallback "00000000" for invalid/unknown input types, multiple invalid chunks would share the same hash, potentially causing incorrect cache hits. Consider returning null to signal that caching should be skipped for this chunk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 261 - 286,
computeChunkDataHash currently returns a shared fallback string "00000000" for
invalid/unknown input, causing different invalid chunks to collide in cache;
change computeChunkDataHash to return null instead of "00000000" for the two
fallback branches (invalid data and unknown type), update its signature/type to
string | null, and then update all call sites that use computeChunkDataHash
(e.g., cache key generation and chunk cache lookup/insert code) to treat a null
return as "skip caching for this chunk" rather than using the hash (do not
attempt to insert or lookup in cache when null). Ensure logging still warns on
invalid data but do not produce a shared hash.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 207-222: The cache misses are caused by mismatched hashing:
generateBlockHash uses SHA-256 when WebCrypto is available while
computeBlockHash uses FNV-1a (8-char hex); update generateBlockHash to always
compute the same FNV-1a 8-character hex (e.g., by calling or reusing
computeBlockHash) so both sides produce identical hashes, and remove
computeBlockHashAsync if it is unused (or alternatively export generateBlockHash
and replace calls to computeBlockHash with it) to ensure one consistent hashing
implementation across the codebase.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c837d851-fa6c-404c-863a-05eef4b09822

📥 Commits

Reviewing files that changed from the base of the PR and between bcb2dee and 132859e.

📒 Files selected for processing (4)
  • renderer/viewer/lib/chunkCacheIntegration.ts
  • renderer/viewer/lib/worldrendererCommon.ts
  • src/chunkGeometryCache.ts
  • src/customChannels.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • renderer/viewer/lib/worldrendererCommon.ts

@felmonon
Author

@zardoy CI is green on this PR now, including the build-and-deploy workflow. The earlier protocol/storage changes from the original review on #477 are also addressed in the current branch.

Is there anything else you want changed on the client side here, or is this mainly waiting on real server/in-game testing now?

@zardoy
Owner

zardoy commented Mar 24, 2026

Is there anything else you want changed on the client side here, or is this mainly waiting on real server/in-game testing now?

I don't think so. Unfortunately I didn't get to it in time while the project was in active development, and the server side is not maintained anymore. But if you want, I can note in the TypeScript repo where these changes should be implemented, and you can test it there.

@felmonon
Author

Yes please! I'd love to implement and test the server-side changes in the TypeScript repo. Could you point me to where those changes should go?

- Extract shared FNV-1a hash into blockHash.ts (fixes hash mismatch:
  SHA-256 vs FNV-1a caused 100% cache misses when WebCrypto available)
- Extract replay tracking into chunkCacheReplay.ts; emit both 'packet'
  and 'map_chunk' events so mineflayer world loader processes cache hits
- Fix per-section hashing in worldrendererCommon.ts (was chunk-level)
- computeChunkDataHash returns null instead of '00000000' sentinel to
  prevent false cache hits across chunks with invalid data
- Parallelize evictOldEntries file deletions with Promise.all in both
  chunkGeometryCache and chunkPacketCache
- Deserialize transparent geometry arrays as Float32Array (not number[])
- Remove dead computeBlockHashAsync export (unused async wrapper)
- Add 63 tests across all cache modules covering: disk persistence,
  memory-only mode, hash mismatch, invalidation, server isolation,
  clear, transparent geometry round-trip, Uint32Array indices,
  stale-file cleanup, computePacketHash, hasValidCache,
  getCachedChunksInfo, section block state store/get/clear,
  parseSectionKey, isGeometryCacheable, computeChunkDataHash
@felmonon
Author

felmonon commented Mar 31, 2026

Quick update: I pushed 6d6f87d to clean up the remaining review feedback and get the branch back to green.

This follow-up mostly tightens the edges around the cache work rather than changing the core approach. I cleaned up the new tests so lint passes, simplified ChunkPacketCache.clear(), made the palette fallback handling explicit, changed chunk section extraction to skip only bad sections instead of dropping the whole chunk, and added per-chunk error handling around addColumn() so failures are logged instead of turning into unobserved rejections.

The earlier cache fixes are still in place here too: shared block hashing, replayed map_chunk packets now go through the path mineflayer actually consumes, and the cache modules now have broad regression coverage. Local checks I reran after the follow-up:

  • pnpm lint
  • pnpm test-unit --run
  • pnpm exec tsc --noEmit

All of those pass locally, and build-and-deploy is green again.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (8)
renderer/viewer/lib/chunkCacheIntegration.test.ts (2)

3-15: Minor: Import order.

The lint hint suggests ../../../src/blockHash should be imported before ./chunkCacheIntegration for consistent ordering. This is a stylistic preference.

♻️ Optional fix
+import { computeBlockStateHash } from '../../../src/blockHash'
 import {
   computeBlockHash,
   computeChunkDataHash,
   createSectionKey,
   clearAllBlockStates,
   clearSectionBlockStates,
   extractChunkSectionBlockStates,
   getSectionBlockStates,
   isGeometryCacheable,
   parseSectionKey,
   storeSectionBlockStates,
 } from './chunkCacheIntegration'
-import { computeBlockStateHash } from '../../../src/blockHash'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/chunkCacheIntegration.test.ts` around lines 3 - 15,
Reorder the imports so that external/module-relative imports come before
local-relative ones: move the import of computeBlockStateHash from
'../../../src/blockHash' to appear above the grouped import from
'./chunkCacheIntegration'. Update the top of chunkCacheIntegration.test.ts so
computeBlockStateHash is imported first (before functions like computeBlockHash,
computeChunkDataHash, createSectionKey, clearAllBlockStates,
clearSectionBlockStates, extractChunkSectionBlockStates, getSectionBlockStates,
isGeometryCacheable, parseSectionKey, storeSectionBlockStates) to satisfy the
lint ordering rule.

17-17: Using require() for prismarine-chunk.

The dynamic require() is likely intentional to support version-specific chunk loading at runtime. If the project uses ESM exclusively, consider using dynamic import() instead, but this may depend on prismarine-chunk's module format.
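A minimal sketch of the ESM-friendly direction, assuming prismarine-chunk's CommonJS shape (unwrapDefault is an illustrative helper, not part of the codebase):

```typescript
// Illustrative helper: CommonJS packages loaded via dynamic import() in ESM
// often expose their real export under `default`, so unwrap it defensively.
function unwrapDefault<T>(mod: T | { default: T }): T {
  const maybe = mod as { default?: T }
  return maybe.default !== undefined ? maybe.default : (mod as T)
}

// Usage sketch (not executed here; prismarine-chunk's export shape is an
// assumption — check it before relying on this):
// const ChunkLoader = unwrapDefault(await import('prismarine-chunk'))
// const Chunk = ChunkLoader('1.20.4')
```

The call site stays at runtime, so version-specific loading still works; only the module-resolution mechanism changes.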

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/chunkCacheIntegration.test.ts` at line 17, The test uses
a CommonJS require for prismarine-chunk via the prismarineChunk variable which
won't work in an ESM-only codebase; replace the dynamic require with a dynamic
import (e.g., use await import('prismarine-chunk') where the test setup/runtime
can be async) and update uses to handle a possible default export (check
prismarine-chunk export shape and use imported.default if needed); ensure the
import is performed at runtime in the same place where prismarineChunk was
required so version-specific loading still works and tests remain
async-compatible.
src/chunkPacketCache.ts (2)

345-370: clear() iterates and deletes from memoryCache during iteration—safe in modern JS but consider simplification.

Deleting keys while iterating over Map.keys() is safe in JavaScript (the iterator is live and handles deletions). However, a simpler approach would be to just call this.memoryCache.clear() since setServerInfo already clears the whole map on server switch.

♻️ Optional simplification
 async clear (): Promise<void> {
-  // Clear memory cache for current server
-  const serverPrefix = `${this.serverAddress}:`
-  for (const key of this.memoryCache.keys()) {
-    if (key.startsWith(serverPrefix)) {
-      this.memoryCache.delete(key)
-    }
-  }
+  // Clear memory cache (only current server's entries exist after setServerInfo)
+  this.memoryCache.clear()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/chunkPacketCache.ts` around lines 345 - 370, The clear() method currently
iterates over this.memoryCache.keys() and deletes matching keys, which is
unnecessary—replace that loop with a single this.memoryCache.clear() call to
simplify and ensure the in-memory cache is fully emptied; update the clear()
implementation to call this.memoryCache.clear() (referencing the clear() method
and memoryCache property) and rely on existing setServerInfo behavior for server
switches.

102-120: Potential race: loadMetadata is async but callers may proceed immediately.

After setServerInfo returns, loadMetadata may still be in progress. If get() or getCachedChunksInfo() is called immediately, they may see an empty this.metadata.chunks even though disk entries exist.

The integration in customChannels.ts chains setServerInfo through cacheStatePromise and awaits it before packet handling, which mitigates this. However, direct callers of setServerInfo could encounter stale metadata reads if they don't await the returned promise.

Since the promise is awaited and the cache is designed to be eventually consistent (cache misses trigger re-fetch), this is acceptable, but worth noting for future callers.
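The cache-ready-promise pattern described above can be sketched as follows (MetadataCache, getCachedChunks, and waitForCacheReady are illustrative names, not the actual chunkPacketCache API):

```typescript
// Sketch: expose a cache-ready promise so callers can await metadata loading.
class MetadataCache {
  private cacheStatePromise: Promise<void> = Promise.resolve()
  private chunks: string[] = []

  // Returns a promise that resolves once metadata for this server is loaded.
  setServerInfo(serverAddress: string): Promise<void> {
    this.cacheStatePromise = this.loadMetadata(serverAddress)
    return this.cacheStatePromise
  }

  // Direct callers that did not await setServerInfo can wait here instead.
  async waitForCacheReady(): Promise<void> {
    return this.cacheStatePromise
  }

  getCachedChunks(): string[] {
    return this.chunks
  }

  private async loadMetadata(serverAddress: string): Promise<void> {
    // Stand-in for the real async IndexedDB metadata read.
    this.chunks = [`${serverAddress}:0,0`]
  }
}
```

Any reader that awaits waitForCacheReady() before calling getCachedChunks() is guaranteed to see post-load metadata, closing the race for callers that bypass the promise returned by setServerInfo.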

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/chunkPacketCache.ts` around lines 102 - 120, setServerInfo currently can
lead to a race where callers read metadata before loadMetadata finishes; ensure
callers can reliably wait for metadata by making setServerInfo set and return a
cache-ready promise that resolves only after loadMetadata completes (e.g.,
assign this.cacheStatePromise = this.loadMetadata() and await it inside
setServerInfo), and/or add an explicit waitForMetadata or waitForCacheReady
method that returns that promise; update usages to await this.cacheStatePromise
or the new wait method before calling get() or getCachedChunksInfo().
renderer/viewer/lib/chunkCacheIntegration.ts (3)

217-227: All-or-nothing extraction may cause unnecessary cache fallback.

If any single section fails to decode (line 221 returns null), the entire chunk extraction fails and the caller falls back to computeChunkDataHash. This is conservative but means a single malformed section causes all sections to lose per-section hash precision.

Consider continuing with successfully decoded sections and only skipping the failed ones, or logging which section failed for debugging.
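Assuming decode failures surface as null, the skip-and-continue shape could look like this (RawSection and decodeSection are simplified stand-ins for the real decodeChunkSectionBlockStates path):

```typescript
// Sketch: keep successfully decoded sections instead of aborting the chunk.
interface RawSection { y: number; data: Uint16Array | null }

function decodeSection(section: RawSection): Uint16Array | null {
  return section.data // null models a decode failure in the real decoder
}

function extractSectionBlockStates(sections: RawSection[]): Map<number, Uint16Array> {
  const byY = new Map<number, Uint16Array>()
  for (const section of sections) {
    const decoded = decodeSection(section)
    if (decoded === null) {
      // Skip only the malformed section and log it; the remaining sections
      // still get per-section hash precision.
      console.warn(`chunk cache: skipping undecodable section at y=${section.y}`)
      continue
    }
    byY.set(section.y, decoded)
  }
  return byY
}
```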

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 217 - 227, The
loop over parsedChunk.sections should not abort the whole extraction when
decodeChunkSectionBlockStates fails for one section; change the behavior in the
for-loop where decodeChunkSectionBlockStates(sectionValue) currently leads to an
early return null — instead, skip that section (continue), optionally log the
failed sectionIndex or section Y (using sectionIndex and minY + sectionIndex *
16) for debugging, and still populate sectionBlockStatesByY for successfully
decoded sections so callers get per-section hashes rather than falling back to
computeChunkDataHash for the entire chunk.

137-141: Consider handling out-of-bounds palette index more explicitly.

If palette exists but value exceeds palette.length, palette[value] returns undefined, and the nullish coalescing falls back to value. This is likely intentional for direct palette types, but the implicit fallback may mask malformed data.

Consider adding a bounds check or comment clarifying the expected behavior.
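One explicit shape for the error-on-malformed-data option (resolvePaletteEntry is a hypothetical helper; the real code inlines this logic in its decode loop):

```typescript
// Sketch: make an out-of-bounds palette access an explicit error rather than
// an implicit fallback through `?? value`.
function resolvePaletteEntry(palette: number[] | undefined, value: number): number {
  if (palette === undefined) {
    // Direct palette type: the stored value is already a global block state id.
    return value
  }
  if (value < 0 || value >= palette.length) {
    throw new RangeError(
      `palette index ${value} out of bounds (palette length ${palette.length})`
    )
  }
  return palette[value]
}
```

The intentional-fallback option would instead return value in the out-of-bounds branch with an inline comment; either choice makes the behavior documented rather than incidental.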

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 137 - 141, The
loop in chunkCacheIntegration that maps values to blockStates uses
palette?.[value] ?? value which silently falls back when value is outside
palette bounds; update the logic in the for loop (the block that calls
getSerializedBitArrayValue and assigns blockStates[index]) to explicitly check
if palette is defined and value < palette.length before using palette[value],
otherwise either (a) fall back to value intentionally with an inline comment
clarifying this behavior, or (b) log/throw an error for malformed data; choose
one approach and implement it so out-of-bounds palette accesses are handled
explicitly rather than relying on the nullish coalescing.

70-71: Consider moving block state storage to per-instance scope for better separation of concerns.

sectionBlockStates is a module-level global Map shared across potential renderer instances. While the current architecture enforces a singleton renderer pattern (window.world), making this storage per-instance would improve separation of concerns and prevent issues if multi-renderer support is added in the future (e.g., split-screen, picture-in-picture).

This is a design consideration rather than a current bug, but scoping the cache per-renderer instance would be more resilient.
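A per-instance version might be sketched like this (SectionBlockStateStore is an illustrative class name; today the module keeps a single top-level Map):

```typescript
// Sketch: scope block-state storage to an instance instead of module state.
class SectionBlockStateStore {
  private readonly sectionBlockStates = new Map<string, Uint16Array>()

  store(sectionKey: string, states: Uint16Array): void {
    this.sectionBlockStates.set(sectionKey, states)
  }

  get(sectionKey: string): Uint16Array | null {
    return this.sectionBlockStates.get(sectionKey) ?? null
  }

  clearAll(): void {
    this.sectionBlockStates.clear()
  }
}
```

Each renderer would own one store, so a hypothetical second renderer (split-screen, picture-in-picture) could not observe or clobber another's cached states.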

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 70 - 71, The
module-level Map named sectionBlockStates should be made an instance property on
the renderer/chunk-cache integration so each renderer has its own block-state
cache; locate all uses of sectionBlockStates in chunkCacheIntegration.ts and
replace them with a per-instance property (e.g., this.sectionBlockStates)
initialized in the constructor or factory for the renderer/integration class,
update any helper functions to accept the instance or bind them as methods so
they reference the instance property instead of the global, and remove the
top-level const sectionBlockStates to avoid shared state across renderer
instances.
renderer/viewer/lib/worldrendererCommon.ts (1)

846-859: Consider adding error handling for rejected addColumn calls.

Promise.all at line 848 will reject if any addColumn call throws. The finally block ensures cleanup runs, but the rejection propagates unhandled (the async IIFE result is voided at line 846). This could cause unobserved promise rejections.

Consider wrapping individual addColumn calls or adding a .catch() to log failures gracefully.

♻️ Proposed fix
             void (async () => {
               try {
                 await Promise.all(currentLoadChunkBatch!.data.map(async (args) => {
-                  this.queuedChunks.delete(`${args[0]},${args[1]}`)
-                  await this.addColumn(...args as Parameters<typeof this.addColumn>)
+                  try {
+                    this.queuedChunks.delete(`${args[0]},${args[1]}`)
+                    await this.addColumn(...args as Parameters<typeof this.addColumn>)
+                  } catch (err) {
+                    console.error(`Failed to load chunk ${args[0]},${args[1]}:`, err)
+                  }
                 }))
               } finally {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/worldrendererCommon.ts` around lines 846 - 859, The
Promise.all call that awaits currentLoadChunkBatch!.data.map(async (args) => {
... await this.addColumn(...args) }) can reject and produce unobserved promise
rejections; change it so individual addColumn failures are handled (e.g.,
replace Promise.all with Promise.allSettled or wrap each addColumn call in a
try/catch or .catch that logs the error) while still deleting the queuedChunks
entry for that args and preserving the finally cleanup of
queuedFunctions/currentLoadChunkBatch; update the async IIFE around
currentLoadChunkBatch, the map callback that calls this.addColumn, and any
logging to ensure failures are logged (include function names addColumn,
queuedChunks, queuedFunctions, currentLoadChunkBatch) instead of letting the
rejection propagate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 31-32: ESLint flagged the import order: non-type imports must come
before type imports — in chunkCacheIntegration.ts move the non-type import
computeBlockStateHash so it appears above the type-only import
MesherGeometryOutput; update the imports so computeBlockStateHash is imported
first and MesherGeometryOutput (type) follows to satisfy the linter.

---

Nitpick comments:
In `@renderer/viewer/lib/chunkCacheIntegration.test.ts`:
- Around line 3-15: Reorder the imports so that external/module-relative imports
come before local-relative ones: move the import of computeBlockStateHash from
'../../../src/blockHash' to appear above the grouped import from
'./chunkCacheIntegration'. Update the top of chunkCacheIntegration.test.ts so
computeBlockStateHash is imported first (before functions like computeBlockHash,
computeChunkDataHash, createSectionKey, clearAllBlockStates,
clearSectionBlockStates, extractChunkSectionBlockStates, getSectionBlockStates,
isGeometryCacheable, parseSectionKey, storeSectionBlockStates) to satisfy the
lint ordering rule.
- Line 17: The test uses a CommonJS require for prismarine-chunk via the
prismarineChunk variable which won't work in an ESM-only codebase; replace the
dynamic require with a dynamic import (e.g., use await
import('prismarine-chunk') where the test setup/runtime can be async) and update
uses to handle a possible default export (check prismarine-chunk export shape
and use imported.default if needed); ensure the import is performed at runtime
in the same place where prismarineChunk was required so version-specific loading
still works and tests remain async-compatible.

In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 217-227: The loop over parsedChunk.sections should not abort the
whole extraction when decodeChunkSectionBlockStates fails for one section;
change the behavior in the for-loop where
decodeChunkSectionBlockStates(sectionValue) currently leads to an early return
null — instead, skip that section (continue), optionally log the failed
sectionIndex or section Y (using sectionIndex and minY + sectionIndex * 16) for
debugging, and still populate sectionBlockStatesByY for successfully decoded
sections so callers get per-section hashes rather than falling back to
computeChunkDataHash for the entire chunk.
- Around line 137-141: The loop in chunkCacheIntegration that maps values to
blockStates uses palette?.[value] ?? value which silently falls back when value
is outside palette bounds; update the logic in the for loop (the block that
calls getSerializedBitArrayValue and assigns blockStates[index]) to explicitly
check if palette is defined and value < palette.length before using
palette[value], otherwise either (a) fall back to value intentionally with an
inline comment clarifying this behavior, or (b) log/throw an error for malformed
data; choose one approach and implement it so out-of-bounds palette accesses are
handled explicitly rather than relying on the nullish coalescing.
- Around line 70-71: The module-level Map named sectionBlockStates should be
made an instance property on the renderer/chunk-cache integration so each
renderer has its own block-state cache; locate all uses of sectionBlockStates in
chunkCacheIntegration.ts and replace them with a per-instance property (e.g.,
this.sectionBlockStates) initialized in the constructor or factory for the
renderer/integration class, update any helper functions to accept the instance
or bind them as methods so they reference the instance property instead of the
global, and remove the top-level const sectionBlockStates to avoid shared state
across renderer instances.

In `@renderer/viewer/lib/worldrendererCommon.ts`:
- Around line 846-859: The Promise.all call that awaits
currentLoadChunkBatch!.data.map(async (args) => { ... await
this.addColumn(...args) }) can reject and produce unobserved promise rejections;
change it so individual addColumn failures are handled (e.g., replace
Promise.all with Promise.allSettled or wrap each addColumn call in a try/catch
or .catch that logs the error) while still deleting the queuedChunks entry for
that args and preserving the finally cleanup of
queuedFunctions/currentLoadChunkBatch; update the async IIFE around
currentLoadChunkBatch, the map callback that calls this.addColumn, and any
logging to ensure failures are logged (include function names addColumn,
queuedChunks, queuedFunctions, currentLoadChunkBatch) instead of letting the
rejection propagate.

In `@src/chunkPacketCache.ts`:
- Around line 345-370: The clear() method currently iterates over
this.memoryCache.keys() and deletes matching keys, which is unnecessary—replace
that loop with a single this.memoryCache.clear() call to simplify and ensure the
in-memory cache is fully emptied; update the clear() implementation to call
this.memoryCache.clear() (referencing the clear() method and memoryCache
property) and rely on existing setServerInfo behavior for server switches.
- Around line 102-120: setServerInfo currently can lead to a race where callers
read metadata before loadMetadata finishes; ensure callers can reliably wait for
metadata by making setServerInfo set and return a cache-ready promise that
resolves only after loadMetadata completes (e.g., assign this.cacheStatePromise
= this.loadMetadata() and await it inside setServerInfo), and/or add an explicit
waitForMetadata or waitForCacheReady method that returns that promise; update
usages to await this.cacheStatePromise or the new wait method before calling
get() or getCachedChunksInfo().
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b3eae7c8-372d-4763-9d33-464ea01e063e

📥 Commits

Reviewing files that changed from the base of the PR and between 1ce6368 and bec2b1f.

📒 Files selected for processing (13)
  • renderer/viewer/lib/chunkCacheIntegration.test.ts
  • renderer/viewer/lib/chunkCacheIntegration.ts
  • renderer/viewer/lib/worldrendererCommon.ts
  • src/blockHash.test.ts
  • src/blockHash.ts
  • src/chunkCacheReplay.test.ts
  • src/chunkCacheReplay.ts
  • src/chunkGeometryCache.test.ts
  • src/chunkGeometryCache.ts
  • src/chunkPacketCache.test.ts
  • src/chunkPacketCache.ts
  • src/customChannels.ts
  • vitest.config.ts
✅ Files skipped from review due to trivial changes (2)
  • vitest.config.ts
  • src/chunkGeometryCache.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/customChannels.ts


@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
renderer/viewer/lib/chunkCacheIntegration.ts (1)

187-189: Consider using nullish coalescing for semantic clarity.

Using ?? null instead of || null more precisely expresses the intent of converting undefined (missing key) to null, rather than relying on falsy coercion.

♻️ Suggested refinement
 export function getSectionBlockStates (sectionKey: string): Uint16Array | null {
-  return sectionBlockStates.get(sectionKey) || null
+  return sectionBlockStates.get(sectionKey) ?? null
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@renderer/viewer/lib/chunkCacheIntegration.ts` around lines 187 - 189, The
function getSectionBlockStates currently returns
sectionBlockStates.get(sectionKey) || null, which relies on falsy coercion (a
Uint16Array is an object and always truthy, so behavior happens to be identical
here, but the intent is implicit); change the expression to use the nullish
coalescing operator (?? null) so that only undefined/null becomes null and the
undefined-to-null conversion is explicit. Update the return in
getSectionBlockStates to use ?? null and keep the rest of the function intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@renderer/viewer/lib/chunkCacheIntegration.ts`:
- Around line 187-189: The function getSectionBlockStates currently returns
sectionBlockStates.get(sectionKey) || null, which relies on falsy coercion (a
Uint16Array is an object and always truthy, so behavior happens to be identical
here, but the intent is implicit); change the expression to use the nullish
coalescing operator (?? null) so that only undefined/null becomes null and the
undefined-to-null conversion is explicit. Update the return in
getSectionBlockStates to use ?? null and keep the rest of the function intact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a0fa275f-75e4-4bde-b64e-245e20270f44

📥 Commits

Reviewing files that changed from the base of the PR and between bec2b1f and 32aa7c8.

📒 Files selected for processing (2)
  • renderer/viewer/lib/chunkCacheIntegration.test.ts
  • renderer/viewer/lib/chunkCacheIntegration.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • renderer/viewer/lib/chunkCacheIntegration.test.ts
