Inference Perf is a production-scale GenAI inference benchmarking tool for measuring and analyzing the performance of inference deployments. It is agnostic of model servers, so you can measure performance and compare different systems apples-to-apples.
It was started as part of the inference benchmarking and metrics standardization effort in wg-serving, which aims to standardize the benchmark tooling and the metrics used to measure inference performance across the Kubernetes and model server communities.
- Comprehensive Latency Metrics: TTFT, TPOT, ITL, and Normalized TPOT.
- Throughput Tracking: Input, Output, and Total tokens per second.
- Goodput Measurement: Measure the rate of requests that meet your SLO constraints. See goodput.md.
- Automatic Visualization: Generate charts for QPS vs Latency/Throughput/Goodput. See analysis.md.
- Real-world Datasets: Support for ShareGPT, CNN/DailyMail, Infinity Instruct, and BillSum.
- Synthetic & Random: Configure exact input/output distributions.
- Advanced Scenarios: Shared prefix and multi-turn chat conversations.
- Load Patterns: Constant rate, Poisson arrival, and concurrent user simulation.
- Multi-Stage Runs: Define stages with varying rates and durations to find saturation points (see the sketch after this list).
- Trace Replay: Replay real-world traces (e.g., Azure dataset) or OpenTelemetry traces with agentic tree-of-thought simulation and visualization.
- 10k+ QPS: Scales to very high load thanks to an optimized multi-process architecture.
- Automatic Saturation Detection: Find the limits of your system via sweeps.
- Verified support for vLLM, SGLang, and TGI, with server-side aggregate and time-series metrics.
- Easily extensible to any OpenAI-compatible endpoint.
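As a concrete illustration of multi-stage runs, the sketch below ramps the request rate across stages to locate a saturation point. The rate/duration stage fields match the `--load.stages` flag used in the quickstart below; this is an illustrative sketch, so consult config.md and loadgen.md for the full schema.

```yaml
# Illustrative multi-stage ramp (a sketch, not the authoritative schema):
# each stage runs at a fixed request rate for a fixed duration, walking
# the system up to its saturation point in a single run.
load:
  type: constant
  stages:
    - rate: 5       # requests per second
      duration: 60  # seconds
    - rate: 10
      duration: 60
    - rate: 20
      duration: 60
```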
Install inference-perf:

```bash
pip install inference-perf
```
Run a benchmark with a simple random workload:

```bash
inference-perf --server.type vllm --server.base_url http://localhost:8000 --data.type random --load.type constant --load.stages '[{"rate": 10, "duration": 60}]' --api.streaming true
```
Alternatively, you can run using a configuration file:

```bash
inference-perf --config_file config.yml
```

When you run inference-perf, it displays a rich summary table in the CLI.
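For reference, a config.yml roughly equivalent to the CLI invocation above might look like the sketch below. The keys are inferred from the dotted flag names (server.type, load.stages, and so on); see config.md for the authoritative schema.

```yaml
# Sketch of a config.yml mirroring the quickstart CLI flags above.
# Key names are inferred from the dotted flag names; see config.md
# for the authoritative schema.
api:
  streaming: true
server:
  type: vllm
  base_url: http://localhost:8000
data:
  type: random
load:
  type: constant
  stages:
    - rate: 10
      duration: 60
```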
You can also run with the Docker image:

```bash
docker run -it --rm -v $(pwd)/config.yml:/workspace/config.yml quay.io/inference-perf/inference-perf
```

For deployment instructions, refer to the guide in /deploy.
Explore detailed documentation for specific topics:
| Topic | Description | Link |
|---|---|---|
| Configuration | Full YAML configuration schema and options. | config.md |
| CLI Flags | Overriding configuration via command line flags. | cli_flags.md |
| Load Generation | Detailed explanation of load patterns and multi-worker setup. | loadgen.md |
| Metrics | Definitions of TTFT, TPOT, ITL, etc. | metrics.md |
| Goodput | How to measure requests meeting SLOs. | goodput.md |
| Reports | Understanding generated JSON reports. | reports.md |
| OTel Observability | Instrument benchmark runs with OpenTelemetry tracing to export to Jaeger, Tempo, etc. | otel_instrumentation.md |
| OTel Trace Replay | Data/load type for replaying production traces with complex dependency graphs. | otel_trace_replay.md |
| Conversation Replay | Data/load type for benchmarking concurrent multi-turn agentic conversations with configurable distributions. | conversation_replay.md |
| Analysis | Visualizations and plots for performance metrics. | analysis.md |
We welcome contributions! Please join us:
- Slack: #inference-perf channel in the Kubernetes Slack workspace.
- Community Meeting: Weekly on Thursdays, alternating between 09:00 and 11:30 PDT.
- Code of Conduct: Governed by the Kubernetes Code of Conduct.
See CONTRIBUTING.md for details on how to get started.

