This project is easiest to profile by running the built backend directly with Node's profiler enabled. That gives you:
- `.cpuprofile` CPU traces you can inspect in Chrome DevTools
- `.heapprofile` allocation profiles
- `.heapsnapshot` captures on demand
- optional live inspection through the Node inspector
Do not profile `pnpm dev` for backend performance work. The tsx watch wrapper and the combined frontend/backend process tree add noise to the samples.
From the repo root:

```sh
cd /Users/sawyerhood/middleman
pnpm build
mkdir -p .profiles/backend
```

Use an isolated profile data directory when you want clean captures without touching your normal app state:
```sh
MIDDLEMAN_INSTALL_DIR=/Users/sawyerhood/middleman \
MIDDLEMAN_PROJECT_ROOT=/Users/sawyerhood/middleman \
MIDDLEMAN_HOME=/Users/sawyerhood/middleman/.profiles/backend/data \
MIDDLEMAN_PORT=48387 \
node --enable-source-maps \
  --inspect=127.0.0.1:9230 \
  --cpu-prof \
  --cpu-prof-dir=/Users/sawyerhood/middleman/.profiles/backend \
  --heap-prof \
  --heap-prof-dir=/Users/sawyerhood/middleman/.profiles/backend \
  --heapsnapshot-signal=SIGUSR2 \
  apps/backend/dist/index.js
```

If you want to keep it in the background:
```sh
MIDDLEMAN_INSTALL_DIR=/Users/sawyerhood/middleman \
MIDDLEMAN_PROJECT_ROOT=/Users/sawyerhood/middleman \
MIDDLEMAN_HOME=/Users/sawyerhood/middleman/.profiles/backend/data \
MIDDLEMAN_PORT=48387 \
node --enable-source-maps \
  --inspect=127.0.0.1:9230 \
  --cpu-prof \
  --cpu-prof-dir=/Users/sawyerhood/middleman/.profiles/backend \
  --heap-prof \
  --heap-prof-dir=/Users/sawyerhood/middleman/.profiles/backend \
  --heapsnapshot-signal=SIGUSR2 \
  apps/backend/dist/index.js &
echo $! > .profiles/backend/backend.pid
```

If the slowdown only reproduces with your real data, point `MIDDLEMAN_HOME` at your real app home:
```sh
MIDDLEMAN_INSTALL_DIR=/Users/sawyerhood/middleman \
MIDDLEMAN_PROJECT_ROOT=/Users/sawyerhood/middleman \
MIDDLEMAN_HOME=/Users/sawyerhood/.middleman \
MIDDLEMAN_PORT=48387 \
node --enable-source-maps \
  --inspect=127.0.0.1:9230 \
  --cpu-prof \
  --cpu-prof-dir=/Users/sawyerhood/middleman/.profiles/backend \
  --heap-prof \
  --heap-prof-dir=/Users/sawyerhood/middleman/.profiles/backend \
  --heapsnapshot-signal=SIGUSR2 \
  apps/backend/dist/index.js
```

Notes:
- Make sure the normal app is not already using the same DB files.
- The first startup after schema or storage changes may be slower because migrations can run against the live DB.
- Large live datasets can produce large profile artifacts.
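The first note above can be checked before launching; a minimal sketch, assuming `lsof` is available and the default home path referenced later in this document:

```sh
# Hedged sketch: confirm nothing else has the live SQLite DB open before
# pointing MIDDLEMAN_HOME at it. The DB path is an assumption based on the
# default home used elsewhere in this document.
DB_PATH="${MIDDLEMAN_HOME:-$HOME/.middleman}/swarmd.db"
if lsof "$DB_PATH" >/dev/null 2>&1; then
  DB_STATUS="in-use"
else
  DB_STATUS="free"
fi
echo "db: $DB_STATUS"
```

If the check reports `in-use`, quit the normal app first rather than running two processes against the same database files.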
With `--inspect=127.0.0.1:9230` enabled:

- Open `chrome://inspect`
- Click `Configure...` and confirm `127.0.0.1:9230` is listed
- Click `Open dedicated DevTools for Node`

Use:

- `Performance` or `Profiler` to record CPU while reproducing the slowdown
- `Memory` to take heap snapshots
Keep the `--cpu-prof` and `--heap-prof` flags enabled even when using DevTools so you also get files on disk to inspect later.
- Start the built backend with the profiling flags above.
- Reproduce the slowdown exactly as you normally hit it.
- While it is slow, trigger a heap snapshot:

  ```sh
  kill -USR2 "$(cat /Users/sawyerhood/middleman/.profiles/backend/backend.pid)"
  ```

- Optionally on macOS, capture a sampled stack trace:

  ```sh
  sample "$(cat /Users/sawyerhood/middleman/.profiles/backend/backend.pid)" 10 -file /Users/sawyerhood/middleman/.profiles/backend/backend.sample.txt
  ```

- Stop the backend cleanly so Node flushes the profile files:

  ```sh
  kill "$(cat /Users/sawyerhood/middleman/.profiles/backend/backend.pid)"
  ```

Collect the files under `/Users/sawyerhood/middleman/.profiles/backend/`, especially:

- `*.cpuprofile`
- `*.heapprofile`
- `*.heapsnapshot`
- `backend.sample.txt`
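Checking that a run actually produced artifacts can be scripted; a small sketch, matching the `--cpu-prof-dir`/`--heap-prof-dir` used above (`find` prints nothing if the directory does not exist yet):

```sh
# Count the profile artifacts produced by a run. PROFILE_DIR defaults to the
# directory passed to --cpu-prof-dir and --heap-prof-dir in the launch commands.
PROFILE_DIR="${PROFILE_DIR:-/Users/sawyerhood/middleman/.profiles/backend}"
ARTIFACT_COUNT=$(find "$PROFILE_DIR" -maxdepth 1 \
  \( -name '*.cpuprofile' -o -name '*.heapprofile' \
     -o -name '*.heapsnapshot' -o -name 'backend.sample.txt' \) \
  2>/dev/null | wc -l)
echo "artifacts: $ARTIFACT_COUNT"
```

A count of zero after a stopped run usually means the backend was killed with `SIGKILL` instead of a plain `kill`, so Node never flushed the profiles.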
When sharing a capture for analysis, also include:
- exact repro steps
- how many tabs or clients were connected
- how many managers and workers were active
- whether the run used isolated profile data or the live `~/.middleman` database
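One way to keep that context attached to the capture is to bundle a notes file alongside the artifacts. A sketch, using a temporary directory so it stands alone; for a real capture, point it at `.profiles/backend` instead:

```sh
# Bundle profile artifacts plus a context notes file into one tarball for
# sharing. The notes template mirrors the checklist above; paths are
# placeholders, not a project convention.
CAPTURE_DIR="$(mktemp -d)"
cat > "$CAPTURE_DIR/capture-notes.txt" <<'EOF'
repro steps:
connected tabs/clients:
active managers/workers:
data: isolated profile dir | live ~/.middleman
EOF
tar -czf "$CAPTURE_DIR.tar.gz" -C "$CAPTURE_DIR" .
echo "wrote $CAPTURE_DIR.tar.gz"
```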
The capture that led to the backend fixes in this repo used:
- a built backend, not `pnpm dev`
- Node CPU and heap profiling flags
- an optional Chrome DevTools attach through the Node inspector
- a macOS `sample` trace for a quick stack snapshot
- direct inspection of the live SQLite database at `/Users/sawyerhood/.middleman/swarmd.db`
That combination was enough to show heavy JSON parsing, allocation pressure, and oversized persisted `agent_tool_call` rows dominating transcript and history work.
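The oversized-row finding can be reproduced with a size query against the table. A hedged sketch using a throwaway database, since the real `swarmd.db` schema is not documented here; the `payload` column name is hypothetical and should be replaced with the actual schema:

```sh
# Hypothetical schema: rank agent_tool_call rows by serialized payload size.
# Against the real DB, substitute the actual table and column names.
DB="$(mktemp -d)/demo.db"
sqlite3 "$DB" <<'SQL'
CREATE TABLE agent_tool_call (id INTEGER PRIMARY KEY, payload TEXT);
INSERT INTO agent_tool_call (payload) VALUES ('{"tool":"read","args":{}}');
INSERT INTO agent_tool_call (payload) VALUES ('{"tool":"edit","args":{"diff":"a much larger inline blob ..."}}');
SELECT id, length(payload) AS bytes FROM agent_tool_call ORDER BY bytes DESC LIMIT 10;
SQL
```

Rows whose serialized size is orders of magnitude above the median are the ones worth tracing back through the transcript and history code paths.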