Archive note: This document is a historical record. File paths and implementation names can reference code that has since moved or been removed.
Completed in commit 4c8345a7.
As of 2026-04-11, the rollback path is retired. See ../legacy-llm-provider-runtime-retirement/spec.md.
Unify DeepChat's low-level LLM request pipeline on Vercel AI SDK while keeping the upper-layer contracts unchanged:
- `BaseLLMProvider`, `LLMProviderPresenter`, and `LLMCoreStreamEvent` stay as-is
- existing provider IDs, model configs, and conversation history remain compatible
The AI SDK runtime is the only remaining implementation.
- No functional regression in text streaming, reasoning streaming, tool call streaming, image output, prompt cache, proxy handling, request tracing, routing, and embeddings.
- `LLMCoreStreamEvent` event names, field names, and stop reasons remain unchanged.
- Existing `function_call_record` history must stay reusable across providers.
- Existing provider list / model list / provider check / key status responsibilities remain in provider classes.
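The stability requirement above can be sketched as a discriminated union of stream events. This is a hypothetical shape for illustration only; the real `LLMCoreStreamEvent` definition lives in DeepChat's source tree and its exact fields are not reproduced here:

```typescript
// Hypothetical sketch of a stable stream-event contract. Field names are
// illustrative assumptions, not DeepChat's actual definition.
type LLMCoreStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; reasoning_content: string }
  | { type: 'tool_call_start'; tool_call_id: string; tool_call_name: string }
  | { type: 'tool_call_chunk'; tool_call_id: string; arguments_chunk: string }
  | { type: 'tool_call_end'; tool_call_id: string }
  | { type: 'usage'; prompt_tokens: number; completion_tokens: number }
  | { type: 'stop'; stop_reason: 'complete' | 'tool_use' | 'max_tokens' | 'error' };

// Swapping the runtime underneath is safe only if every adapter keeps
// emitting the same event names, fields, and stop reasons.
function eventName(e: LLMCoreStreamEvent): string {
  return e.type;
}
```

The point of pinning the union at this layer is that the Vercel AI SDK's own stream parts never leak upward; the streaming adapter translates them into these events.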
- Single runtime: `ai-sdk`
- The `DEEPCHAT_LLM_RUNTIME` override has been removed.
- The `llmRuntimeMode` config setting has been removed.
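With both switches gone, runtime selection collapses to a constant. A minimal sketch, assuming a hypothetical helper name (the real resolution code path is not named in this document):

```typescript
// Hypothetical helper: previously this consulted the DEEPCHAT_LLM_RUNTIME
// override and the llmRuntimeMode config setting; after the retirement the
// AI SDK runtime is unconditional.
function resolveLlmRuntime(): 'ai-sdk' {
  return 'ai-sdk';
}
```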
Shared runtime under `src/main/presenter/llmProviderPresenter/aiSdk/` provides:
- provider factory
- model / message mapper
- MCP tool mapper
- streaming adapter
- image runtime
- embedding runtime
- provider-options mapper
- reasoning middleware
- legacy function-call compatibility middleware
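Of the modules above, the reasoning middleware's job is the easiest to show in isolation. The sketch below is a simplified, non-streaming stand-in: it splits one complete response into reasoning and visible text. The real adapter works on incremental deltas, which the AI SDK supports via its `extractReasoningMiddleware`:

```typescript
// Simplified sketch of what a <think>-tag reasoning middleware does:
// separate reasoning content from visible text so reasoning streams as
// its own event kind instead of leaking into the text stream.
function splitThink(raw: string): { reasoning: string; text: string } {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { reasoning: '', text: raw };
  return {
    reasoning: match[1].trim(),
    text: raw.replace(match[0], '').trim(),
  };
}
```

The streaming version is harder because a `<think>` tag can be split across chunk boundaries, which is exactly why this lives in shared middleware rather than being reimplemented per provider.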
Phase 1:
- `OpenAICompatibleProvider`
- `OpenAIResponsesProvider`
- all `extends OpenAICompatibleProvider` providers
Phase 2:
- `AnthropicProvider`
- `GeminiProvider`
- `VertexProvider`
- `AwsBedrockProvider`
- `OllamaProvider`
Phase 3:
- `NewApiProvider`
- `ZenmuxProvider`
Out of scope for first unification pass:
- `AcpProvider`
- `VoiceAIProvider`
Regression matrix:
- pure text
- reasoning native
- reasoning via `<think>` tags
- native tool streaming
- legacy `<function_call>` fallback
- multi-tool history replay
- image input
- image output
- usage mapping
- prompt cache mapping
- proxy / trace / abort
- embeddings
- retired rollback path verification
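One way to drive a matrix like this is a per-provider capability table, so the regression runner knows which cases apply to which provider. The flags and values below are illustrative assumptions, not DeepChat's actual test configuration:

```typescript
// Illustrative capability matrix; flag values are assumptions for the sketch.
interface ProviderCapabilities {
  nativeReasoning: boolean;
  thinkTagReasoning: boolean;
  nativeToolStreaming: boolean;
  embeddings: boolean;
}

const matrix: Record<string, ProviderCapabilities> = {
  OpenAICompatibleProvider: {
    nativeReasoning: false,
    thinkTagReasoning: true,
    nativeToolStreaming: true,
    embeddings: true,
  },
  AnthropicProvider: {
    nativeReasoning: true,
    thinkTagReasoning: false,
    nativeToolStreaming: true,
    embeddings: false,
  },
};

// Select the regression cases that apply to one provider.
function applicableCases(p: ProviderCapabilities): string[] {
  const cases = ['pure text', 'multi-tool history replay', 'usage mapping'];
  if (p.nativeReasoning) cases.push('reasoning native');
  if (p.thinkTagReasoning) cases.push('reasoning via <think>');
  if (p.nativeToolStreaming) cases.push('native tool streaming');
  if (p.embeddings) cases.push('embeddings');
  return cases;
}
```

Encoding the matrix as data keeps the pass criterion below checkable: a provider passes when every applicable case passes, rather than every case in the document.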
Done criteria:
- AI SDK runtime passes the provider regression matrix
- duplicated legacy stream parsers / tool parsers have no remaining callers
- retirement is documented in ../legacy-llm-provider-runtime-retirement/spec.md