Component
Other
Problem Statement
Optional features in the dataviewer backend (object detection, video export) depend on
heavy optional packages (`ultralytics`/PyTorch ~2 GB, `ffmpeg`). Two problems arise:

- **No graceful degradation.** When optional deps are missing, the routers are still
  mounted and requests fail with an unhandled `ImportError` at runtime. There is no
  mechanism to explicitly enable or disable optional features, and no way for the
  frontend to know which features are available before rendering UI for them.
- **Detection requires PyTorch.** The detection service depends on `ultralytics`,
  which pulls in PyTorch (~2 GB). This is prohibitive for resource-constrained
  or CPU-only deployments. An ONNX Runtime alternative would reduce the dependency
  footprint from ~2 GB to ~100 MB while producing identical inference results from
  the same YOLO11n model weights.
Proposed Solution
1. Feature flag system
Backend:

- Add env vars `ENABLE_OBJECT_DETECTION` and `ENABLE_EXPORT` accepting
  `true` / `false` / `auto` (default: `auto`)
- In `auto` mode, attempt to import the required dependency and set the flag
  accordingly (graceful degradation instead of a crash)
- Add a `GET /api/features` endpoint returning the enabled state of each feature
- Conditionally mount the detection and export routers only when their feature is enabled

Frontend:

- Add a `useFeatures()` hook that queries `GET /api/features` once on app load
  (cached with `staleTime: Infinity`)
- Conditionally render detection and export UI based on flag state
- Disabled features show no UI rather than broken buttons
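The `auto` resolution described above could be sketched as follows (a minimal illustration; `feature_enabled` and the module names are hypothetical, not the actual dataviewer code):

```python
import importlib.util
import os

def feature_enabled(env_var: str, module: str) -> bool:
    """Resolve a true/false/auto feature flag.

    In "auto" mode the feature is enabled only if the optional dependency
    can be found, giving graceful degradation instead of an ImportError
    at request time.
    """
    value = os.getenv(env_var, "auto").lower()
    if value in ("true", "1"):
        return True
    if value in ("false", "0"):
        return False
    # auto: probe for the optional dependency without importing it
    return importlib.util.find_spec(module) is not None

# The app would then mount the detection router only when this is True
DETECTION_ENABLED = feature_enabled("ENABLE_OBJECT_DETECTION", "ultralytics")
```

An explicit `true` still wins over a missing dependency, so misconfiguration fails loudly at startup rather than silently hiding the feature.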
2. ONNX detection backend

Add an alternative detection service using ONNX Runtime, selectable via
`DETECTION_BACKEND=ultralytics|onnx` (default: `ultralytics` for backward
compatibility).

ONNX pipeline:

- Letterbox preprocessing (resize to 640×640, normalize, HWC→CHW)
- `onnxruntime` inference with graph optimization enabled
- `cv2.dnn.NMSBoxes()` postprocessing
- `asyncio.to_thread()` wrapper for non-blocking inference
- Dependencies: `onnxruntime >= 1.17.0`, `opencv-python-headless >= 4.10.0`
- GPU support via `onnxruntime-gpu` package swap

The API surface is unchanged — both backends produce the same `DetectionResult` model.
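To make the preprocessing step concrete, here is a minimal sketch of the letterbox transform described above (NumPy-only; the function name and top-left padding placement are illustrative choices, and nearest-neighbor index mapping stands in for `cv2.resize` to keep the sketch dependency-light):

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640) -> np.ndarray:
    """Letterbox an HWC uint8 image: scale to fit, pad to size x size,
    normalize to [0, 1], and return a (1, 3, size, size) float32 tensor."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor resize via index mapping (the service would use cv2.resize)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # Pad with the gray value (114) conventionally used by YOLO letterboxing
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)
    canvas[:nh, :nw] = resized
    blob = canvas.astype(np.float32) / 255.0          # normalize to [0, 1]
    return np.expand_dims(blob.transpose(2, 0, 1), 0)  # HWC -> CHW + batch dim
```

The resulting tensor feeds directly into an `onnxruntime` session input; the scale factor would also be kept so box coordinates can be mapped back to the original image after NMS.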
| Aspect | `ultralytics` | `onnx` |
|---|---|---|
| Dependency size | ~2 GB (PyTorch) | ~100 MB |
| GPU support | CUDA via PyTorch | CUDA via `onnxruntime-gpu` |
| CPU optimizations | Standard | Graph-level optimizations |
| Model updates | Automatic with version bumps | Manual re-export required |
| Security | `.pt` files use pickle (A-03 risk) | `.onnx` files are not pickle-based |
| Best for | Development, full Ultralytics ecosystem | Production, edge, CI |
Configuration:

| Env Var | Default | Description |
|---|---|---|
| `ENABLE_OBJECT_DETECTION` | `auto` | Enable detection feature (`true`/`false`/`auto`) |
| `ENABLE_EXPORT` | `auto` | Enable video export feature (`true`/`false`/`auto`) |
| `DETECTION_BACKEND` | `ultralytics` | Detection inference backend |
| `DETECTION_MODEL` | `yolo11n` | Model name (both backends use the same architecture) |
| `DETECTION_CONFIDENCE` | `0.25` | Default confidence threshold |
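As an illustration, a lightweight CPU-only deployment might combine these variables as follows (example values, not recommendations):

```shell
# Example environment for a CPU-only deployment:
# ONNX detection enabled explicitly, video export disabled.
export ENABLE_OBJECT_DETECTION=true
export ENABLE_EXPORT=false
export DETECTION_BACKEND=onnx
export DETECTION_MODEL=yolo11n
export DETECTION_CONFIDENCE=0.25
```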
Acceptance Criteria:

- `GET /api/features` endpoint returns the enabled state of optional features
- The ONNX backend produces the same `DetectionResult` output as the Ultralytics backend
- The inference backend is selectable via the `DETECTION_BACKEND` env var

Alternatives Considered

- Build-time feature selection via Docker build args. Rejected because it requires
  separate container images per feature combination and does not support runtime
  toggling.
- Frontend-only feature hiding via env vars injected at build time. Rejected because
  the backend routers would still crash on missing deps; the problem is backend-side.
- Remove the detection feature entirely for lightweight deployments. Rejected because
  the feature is valuable; the goal is to make it work with lighter deps, not remove it.

Additional Context

The ONNX backend naturally mitigates the pickle deserialization risk (finding A-03),
since `.onnx` files do not use pickle. The new dependencies would ship as an `[onnx]`
extra alongside the existing `[yolo]` extra.