Multi-node ESP32-S3 repro on Windows: 6 provisioned nodes transmit, but every UDP packet decodes as node_id = 1
I reproduced what looks like the same node identity failure discussed in #375, but with 6 ESP32-S3 boards on RuView v0.7.0.
Date
Reproduced on April 13, 2026
v0.7.0 release date: April 6, 2026
Environment
Host OS: Windows
Repo: ruvnet/RuView
Version used: v0.7.0
Verification server: local rust-port/wifi-densepose-rs/target/release/sensing-server.exe
Host WiFi IP: 192.168.1.132
UDP port: 5005
WiFi SSID: Gavriuwa
Hardware
6 x ESP32-S3, all detected by esptool as ESP32-S3 with 16MB flash.
Provisioned mapping:
node-id 0 -> COM11
node-id 1 -> COM15
node-id 2 -> COM14
node-id 3 -> COM16
node-id 4 -> COM17
node-id 5 -> COM18
Each node was provisioned with:
unique --node-id
--tdm-slot matching its node-id
common --tdm-total 6
--target-ip 192.168.1.132
--target-port 5005
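For reference, the per-node provisioning argument sets described above can be generated and sanity-checked with a short script. This is only a sketch: the `--port` flag and the idea of driving provisioning from Python are my assumptions; the node-id/COM mapping, flags, and values are taken from this report.

```python
# Sketch: build the per-node provisioning argument sets used in this repro.
# The "--port" flag is hypothetical; node-id/COM mapping and the other
# flags/values are exactly as provisioned in this report.
PORT_MAP = {0: "COM11", 1: "COM15", 2: "COM14", 3: "COM16", 4: "COM17", 5: "COM18"}

def provision_args(node_id: int) -> list:
    return [
        "--port", PORT_MAP[node_id],          # hypothetical flag name
        "--node-id", str(node_id),
        "--tdm-slot", str(node_id),           # slot matches node-id
        "--tdm-total", "6",
        "--target-ip", "192.168.1.132",
        "--target-port", "5005",
    ]

# Sanity check: node-ids and TDM slots are unique across all six boards.
all_args = [provision_args(n) for n in range(6)]
assert len({a[a.index("--node-id") + 1] for a in all_args}) == 6
assert len({a[a.index("--tdm-slot") + 1] for a in all_args}) == 6
```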
Exact steps
Flash all six boards with the release binaries from firmware/esp32-csi-node/release_bins.
Provision all six boards with unique node-id 0..5.
Let all six boards connect to WiFi and transmit to 192.168.1.132:5005.
Capture UDP traffic directly on the Windows host, with no Docker in the path.
Run the local sensing server with:
.\rust-port\wifi-densepose-rs\target\release\sensing-server.exe --source esp32 --http-port 3000 --ws-port 3001 --udp-port 5005
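To rule out the server's decoder, the raw datagrams can also be inspected directly on port 5005 before they reach sensing-server. A minimal sketch, assuming only the target port from this report; the on-wire packet layout is not documented here, so this hex-dumps the header bytes rather than decoding node_id (identical header bytes across different source IPs would point at the firmware side):

```python
import socket

def summarize_datagram(data: bytes, addr: tuple) -> dict:
    """Return the source IP and a hex dump of the first 16 bytes.

    The CSI packet layout is not documented in this report, so we do not
    attempt to decode node_id; comparing header bytes across source IPs
    is enough to see whether all boards send the same identity field."""
    return {"source_ip": addr[0], "header_hex": data[:16].hex()}

def capture(port: int = 5005, count: int = 10) -> list:
    """Blockingly capture `count` datagrams on the given UDP port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    out = []
    for _ in range(count):
        data, addr = sock.recvfrom(4096)
        out.append(summarize_datagram(data, addr))
    sock.close()
    return out

# Example on a synthetic datagram (no board required):
print(summarize_datagram(b"\x01\x00\x00\x00csi", ("192.168.1.37", 50000)))
```

Note that sensing-server must be stopped while this runs, since only one process can bind UDP 5005.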
Expected result
Packets from each board should decode with their provisioned node_id
The sensing server should track multiple logical nodes
Actual result
All 6 boards transmit successfully
UDP capture shows 6 unique source IPs
But every decoded UDP packet carries node_id = 1
The sensing server only sees a single logical node because all streams collapse onto node 1
Packet-level evidence
30-second UDP capture summary:
total packets: 3614
packet kinds:
csi: 3432
vitals: 179
feature: 3
source IPs observed:
192.168.1.37
192.168.1.49
192.168.1.61
192.168.1.77
192.168.1.101
192.168.1.142
decoded node IDs observed: 1 (no other value appears)
Example packet records:
{"source_ip":"192.168.1.37","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}
{"source_ip":"192.168.1.61","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}
{"source_ip":"192.168.1.49","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}
{"source_ip":"192.168.1.77","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}
{"source_ip":"192.168.1.101","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}
{"source_ip":"192.168.1.142","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}
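The collapse shown in the records above can be checked mechanically. This sketch groups the decoder's JSON-lines output by source IP and collects the node_ids seen per IP; the record fields are exactly those shown above:

```python
import json
from collections import defaultdict

def node_ids_by_source(jsonl_lines):
    """Map each source_ip to the set of node_ids observed in its packets."""
    seen = defaultdict(set)
    for line in jsonl_lines:
        rec = json.loads(line)
        seen[rec["source_ip"]].add(rec["node_id"])
    return dict(seen)

# Two of the records from this capture; a healthy run would show a
# different node_id per source IP, but here every set is {1}.
records = [
    '{"source_ip":"192.168.1.37","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}',
    '{"source_ip":"192.168.1.61","kind":"csi","node_id":1,"n_subcarriers":64,"freq_mhz":2427}',
]
print(node_ids_by_source(records))
```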
App-level evidence
The local sensing server starts and returns valid JSON from:
GET /api/v1/sensing/latest
GET /api/v1/pose/current
But /api/v1/sensing/latest only reports one node entry:
"nodes":[{"node_id":1,"position":[2.0,0.0,1.5],"rssi_dbm":-10.0,"subcarrier_count":0}]
So the transport is alive, but node identity is not.
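The same check works at the API level. A small helper can count the distinct node entries in the /api/v1/sensing/latest response; the payload shape is taken from the response shown above (fetching it over HTTP is left out so the sketch stays self-contained):

```python
import json

def distinct_node_ids(latest_json: str) -> set:
    """Extract the set of node_ids from a /api/v1/sensing/latest body."""
    payload = json.loads(latest_json)
    return {n["node_id"] for n in payload["nodes"]}

# Response body observed in this repro (abridged to the nodes field).
# With six healthy nodes this set should have six entries; here it has one.
body = ('{"nodes":[{"node_id":1,"position":[2.0,0.0,1.5],'
        '"rssi_dbm":-10.0,"subcarrier_count":0}]}')
print(distinct_node_ids(body))
```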
Serial evidence
Serial logs from multiple boards show active CSI collection, for example:
repeated csi_collector callbacks
edge processing output such as vitals / calibration
So this does not look like a dead-node or WiFi-join failure.
Logs available
I have full logs for:
probe
flash
provisioning
serial capture
UDP capture
sensing server stdout
/api/v1/sensing/latest
/api/v1/pose/current
Generated log directories:
logs/20260412-221618-probe
logs/20260412-221642-flash
logs/20260412-221741-provision
logs/20260412-221757-serial-capture
logs/20260412-221905-udp-capture
logs/20260412-222012-server-check
Notes
I intentionally validated with the local Windows binary first, not Docker, because #374 reports that Docker Desktop for Windows drops UDP traffic from multiple ESP32 sources.
This repro still points to node identity collapsing before fusion, which makes multi-node pose / sensing results unreliable.
Question
Is there a known fix beyond #375 / PR #232 for the firmware path that populates the UDP packet node_id field, or a way to verify from serial logs that the transmitted packet header is using the provisioned NVS value rather than a default/static value?