A drop-in OpenAI SDK shim that routes OpenAI-compatible requests through the
EvalOps llm-gateway, adding organization scope, principal attribution, audit,
metering, and governance hooks.
This package wraps the official OpenAI SDKs instead of reimplementing OpenAI wire formats, so teams can route model calls through EvalOps with minimal code changes.
Python:

```bash
pip install evalops-openai
```

```python
from evalops_openai import OpenAI

client = OpenAI(
    organization_id="org_123",
    principal="user:ada@example.com",
)

response = client.chat.completions.create(
    **client.with_provider_ref(
        {
            "model": "gpt-4.1-mini",
            "messages": [{"role": "user", "content": "hello"}],
        }
    )
)
```

TypeScript:

```bash
npm install @evalops/openai
```

```typescript
import { OpenAI } from "@evalops/openai";

const client = new OpenAI({
  organizationId: "org_123",
  principal: "user:ada@example.com",
});

const response = await client.chat.completions.create(
  client.withProviderRef({
    model: "gpt-4.1-mini",
    messages: [{ role: "user", content: "hello" }],
  }),
);
```

Configuration is read from these environment variables:

- `EVALOPS_API_KEY` or `OPENAI_API_KEY`: platform-issued bearer token.
- `EVALOPS_ORGANIZATION_ID`: organization scope stamped into gateway requests.
- `EVALOPS_PRINCIPAL`: optional actor string for audit attribution.
- `EVALOPS_TRACE_ID`: optional trace correlation ID.
- `EVALOPS_LLM_GATEWAY_URL`: EvalOps LLM gateway base URL.
- `EVALOPS_PROVIDER_ENVIRONMENT`: defaults to `prod`.
- `EVALOPS_PROVIDER_CREDENTIAL_NAME`: optional provider ref credential name.
- `EVALOPS_PROVIDER_TEAM_ID`: optional provider ref team ID.
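As a minimal sketch, assuming the client falls back to these variables when
the corresponding constructor arguments are omitted (the token value below is
hypothetical), configuration can live entirely in the environment:

```python
import os

# Hypothetical values for illustration; in practice these would be set in the
# shell or a secrets manager rather than in code.
os.environ["EVALOPS_API_KEY"] = "eo_live_example_token"  # platform-issued bearer token
os.environ["EVALOPS_ORGANIZATION_ID"] = "org_123"
os.environ["EVALOPS_PRINCIPAL"] = "user:ada@example.com"

from evalops_openai import OpenAI

# Assumption: with the variables above set, no constructor arguments are
# needed; organization scope and principal attribution come from the
# environment.
client = OpenAI()
```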
Use `with_provider_ref` (Python) or `withProviderRef` (TypeScript) when a
request should select a specific provider credential. Organizations with
default provider routing can omit the helper and keep the vendor SDK call
shape, as in the sketch below.
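For illustration, a minimal sketch of the default-routing path, where the
helper is omitted and the call matches the vendor SDK exactly:

```python
from evalops_openai import OpenAI

client = OpenAI(
    organization_id="org_123",
    principal="user:ada@example.com",
)

# With default provider routing configured for the organization, the request
# is passed through in the vendor SDK's call shape; no with_provider_ref
# wrapper is needed.
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "hello"}],
)
```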