Lemon uses a single canonical configuration file in TOML format. Configuration is layered:
- Global: `~/.lemon/config.toml`
- Project: `<project>/.lemon/config.toml` (overrides global)
- Environment variables (override file values; `.env` may auto-populate missing env vars at startup)
- Lemon secrets referenced from config (for secret-backed fields)
Runtime state and policy are separate from config. Per-session or per-route "current"
model/thinking values override config defaults at runtime, but they are not persisted in
config.toml.
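The layering above can be sketched as follows. This is an illustrative helper, not Lemon's actual loader; the merge semantics (later layers win, env vars applied last) follow the description above, and the env var name is one of the documented overrides.

```python
import os

def deep_merge(base: dict, override: dict) -> dict:
    """Merge override into base recursively; override wins on conflicts."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

def load_layered(global_cfg: dict, project_cfg: dict, env=os.environ) -> dict:
    """Global < project < environment variables (illustrative key only)."""
    cfg = deep_merge(global_cfg, project_cfg)
    if "LEMON_DEFAULT_PROVIDER" in env:
        cfg.setdefault("defaults", {})["provider"] = env["LEMON_DEFAULT_PROVIDER"]
    return cfg
```
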
[providers.anthropic]
api_key = "sk-ant-..."
[providers.openai]
api_key = "sk-..."
[providers.opencode]
api_key = "opencode-..."
base_url = "https://opencode.ai/zen/v1"
[defaults]
provider = "anthropic"
model = "anthropic:claude-sonnet-4-20250514"
thinking_level = "medium"
engine = "lemon"
[runtime.compaction]
enabled = true
reserve_tokens = 16384
keep_recent_tokens = 20000
[runtime.retry]
enabled = true
max_retries = 3
base_delay_ms = 1000
[runtime.cli.codex]
extra_args = ["-c", "notify=[]"]
# Local main sessions default to auto-approve when this is unset.
auto_approve = true
[runtime.cli.opencode]
# Optional model override passed to `opencode run --model`.
model = "gpt-4.1"
[runtime.cli.pi]
# Optional extra flags prepended to the `pi` command.
extra_args = []
# Optional provider/model overrides passed to `pi --provider/--model`.
provider = "openai"
model = "gpt-4.1"
[runtime.cli.claude]
dangerously_skip_permissions = true
[runtime.tools.web.search]
provider = "brave" # "brave" | "perplexity"
cache_ttl_minutes = 15
[runtime.tools.web.search.perplexity]
model = "perplexity/sonar-pro"
[runtime.tools.web.fetch]
cache_ttl_minutes = 15
allow_private_network = false
allowed_hostnames = []
[runtime.tools.web.fetch.firecrawl]
enabled = true
[runtime.tools.wasm]
enabled = false
auto_build = true
runtime_path = ""
tool_paths = []
default_memory_limit = 10485760
default_timeout_ms = 60000
default_fuel_limit = 10000000
cache_compiled = true
cache_dir = ""
max_tool_invoke_depth = 4
[tui]
theme = "lemon"
debug = false
[logging]
# Optional: write logs to a file for later analysis.
# If unset/empty, file logging is disabled and logs go to stdout/stderr only.
file = "~/.lemon/log/lemon.log"
# Optional: handler level for the file (defaults to "debug").
level = "debug"
[gateway]
max_concurrent_runs = 2
default_engine = "lemon"
auto_resume = false
enable_telegram = false
enable_xmtp = false
[gateway.telegram]
bot_token = "123456:token"
allowed_chat_ids = [12345678]
[gateway.xmtp]
env = "production" # production | dev | local
wallet_address = "${XMTP_WALLET_ADDRESS}"
wallet_key_secret = "xmtp_wallet_key"
db_path = "~/.lemon/xmtp-db"
poll_interval_ms = 1500
connect_timeout_ms = 15000
require_live = true
mock_mode = false
[gateway.voice]
enabled = false
websocket_port = 4047
public_url = "https://example.com"
twilio_account_sid_secret = "twilio_account_sid"
twilio_auth_token_secret = "twilio_auth_token"
twilio_phone_number = "+1234567890"
deepgram_api_key_secret = "deepgram_api_key"
elevenlabs_api_key_secret = "elevenlabs_api_key"
elevenlabs_voice_id = "21m00Tcm4TlvDq8ikWAM"
elevenlabs_output_format = "ulaw_8000"
llm_model = "gpt-4o-mini"
max_call_duration_seconds = 600
silence_timeout_ms = 5000
[profiles.default]
name = "Daily Assistant"
system_prompt = "You are my daily assistant."
[profiles.default.tool_policy]
# Optional preset profile:
# profile = "minimal_core" # full_access | minimal_core | read_only | safe_mode | subagent_restricted | no_external | custom
allow = "all"
deny = []
# Optional for stricter remote/channel use:
# require_approval = ["bash", "write", "edit"]
no_reply = false
[[gateway.bindings]]
transport = "telegram"
chat_id = 12345678
agent_id = "default"

Environment variables override file values. Common overrides:
- `LEMON_DEFAULT_PROVIDER`, `LEMON_DEFAULT_MODEL`
- `LEMON_THEME`, `LEMON_DEBUG`
- `<PROVIDER>_API_KEY`, `<PROVIDER>_BASE_URL` (e.g., `ANTHROPIC_API_KEY`, `OPENAI_BASE_URL`, `OPENCODE_API_KEY`)
- `LEMON_CODEX_EXTRA_ARGS`, `LEMON_CODEX_AUTO_APPROVE`
- `LEMON_CLAUDE_YOLO`
- `LEMON_WASM_ENABLED`, `LEMON_WASM_RUNTIME_PATH`, `LEMON_WASM_TOOL_PATHS`, `LEMON_WASM_AUTO_BUILD`
- `LEMON_LOG_FILE`, `LEMON_LOG_LEVEL`
- `BRAVE_API_KEY`, `PERPLEXITY_API_KEY`, `OPENROUTER_API_KEY`, `FIRECRAWL_API_KEY`
Use only these top-level sections:
- `defaults`
- `runtime`
- `profiles.<agent_id>`
- `providers.<name>`
- `gateway`
- `mesh`
- `tui`
- `logging`
Deprecated sections now fail validation and runtime loading:
- `[agent]` -> move defaults to `[defaults]` and runtime settings to `[runtime]`
- `[agents.<id>]` -> move to `[profiles.<id>]`
- `[agent.tools.*]` -> move to `[runtime.tools.*]`
- `[tools.*]` -> move to `[runtime.tools.*]`
Lemon can auto-load a .env file at startup:
- `./bin/lemon-dev` / `lemon-tui`: loads `<cwd>/.env`, where `<cwd>` is the agent working directory (`--cwd`, or the current directory).
- `clients/lemon-web/server` bridge: loads `<cwd>/.env` from `--cwd` (or the current directory).
- `./bin/lemon-gateway`: loads `.env` from the directory where you launch the script.
By default, existing environment variables are preserved. .env values only fill missing variables.
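The fill-missing-only behavior amounts to something like the sketch below (illustrative, not Lemon's actual loader; only simple `KEY=VALUE` lines are handled here):

```python
def apply_dotenv(dotenv_text: str, env: dict) -> dict:
    """Parse simple KEY=VALUE lines; only set keys absent from env."""
    for line in dotenv_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env.setdefault(key.strip(), value.strip())  # existing vars win
    return env
```
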
Trusted-peer Lemon Mesh replication is configured under [mesh]:
[mesh]
trusted_peers = ["peer-a@host", "peer-b@host"]
replication_poll_interval_ms = 1000
replication_batch_limit = 200
snapshot_scope = "mesh_state"
lease_ttl_ms = 60000

Environment overrides:
- `LEMON_MESH_TRUSTED_PEERS` as a comma-separated list
- `LEMON_MESH_REPLICATION_POLL_INTERVAL_MS`
- `LEMON_MESH_REPLICATION_BATCH_LIMIT`
- `LEMON_MESH_SNAPSHOT_SCOPE`
- `LEMON_MESH_LEASE_TTL_MS`
Current v1 semantics are intentionally narrow:
- trusted peers only
- pull-only replication
- config-reload-driven membership
- no push path or federation
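For example, the comma-separated `LEMON_MESH_TRUSTED_PEERS` override could be resolved like this (a sketch under the assumption that the env var fully replaces the config list; whitespace handling is an assumption):

```python
import os

def mesh_trusted_peers(config_peers: list, env=os.environ) -> list:
    """Env var overrides the config list; blank entries are dropped."""
    raw = env.get("LEMON_MESH_TRUSTED_PEERS")
    if raw is None:
        return config_peers
    return [peer.strip() for peer in raw.split(",") if peer.strip()]
```
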
Lemon supports the Codex subscription provider as `openai-codex` (it uses the ChatGPT OAuth JWT, not `OPENAI_API_KEY`).
Primary setup paths:
mix lemon.onboard
mix lemon.onboard.codex

What it does:
- Resolves Codex OAuth credentials via `Ai.Auth.OpenAICodexOAuth`
- Stores credentials in encrypted secrets
- Writes `providers.openai-codex.auth_source = "oauth"` plus `providers.openai-codex.oauth_secret`
- Optionally updates `[defaults]` provider/model
- Uses an interactive arrow-key TUI for selection steps when running in a real terminal
- Listens on the localhost OAuth callback automatically and falls back to manual paste only if the callback cannot be captured
The onboarding flow opens the OpenAI auth URL directly and stores the returned OAuth credentials in Lemon secrets.
To force a token explicitly, set:
- `OPENAI_CODEX_API_KEY` (preferred)
- `CHATGPT_TOKEN` (fallback)
Minimal config needed for the control-plane Codex smoke:
[runtime.cli.codex]
extra_args = ["-c", "notify=[]"]
[gateway]
default_engine = "lemon"

Preflight it locally before running the smoke:

./scripts/control_plane_codex_smoke.mjs --check-only

This smoke targets the Codex CLI engine path. It does not require
`providers.openai-codex` unless you are separately using the provider-backed
Responses API path in the same environment.
Lemon includes a top-level onboarding picker for provider credentials:
mix lemon.onboard
mix lemon.onboard anthropic
mix lemon.onboard codex
# Provider-specific aliases still work
mix lemon.onboard.antigravity
mix lemon.onboard.codex
mix lemon.onboard.copilot

All onboarding flows:
- Verify encrypted secrets are configured
- Let you choose a provider when none is passed
- Use an interactive arrow-key TUI for provider/auth/model selection when a TTY is available
- Run provider OAuth flow by default when supported, or prompt for an API key/token otherwise
- Capture localhost OAuth callbacks automatically when the provider redirect URI is local
- Store credentials in encrypted secrets with provider metadata
- Write the relevant `providers.<provider>` config keys
- Support `--set-default`, `--model`, and `--config-path`
Antigravity OAuth client credentials resolve from Lemon secrets first:
- `google_antigravity_oauth_client_id`
- `google_antigravity_oauth_client_secret`
Environment variables are supported as fallback:
- `GOOGLE_ANTIGRAVITY_OAUTH_CLIENT_ID`
- `GOOGLE_ANTIGRAVITY_OAUTH_CLIENT_SECRET`
Common non-interactive usage:
# Google Antigravity
mix lemon.onboard.antigravity --token <token> --set-default --model gemini-3-pro-high
# OpenAI Codex
mix lemon.onboard.codex --token <token> --set-default --model gpt-5.2
# GitHub Copilot (enterprise + optional model enablement toggle)
mix lemon.onboard.copilot --enterprise-domain company.ghe.com
mix lemon.onboard.copilot --skip-enable-models
mix lemon.onboard.copilot --token <token> --set-default --model gpt-5
mix lemon.onboard.copilot --token <token> --config-path /path/to/config.toml

Anthropic provider auth is API key based. Store your key in secrets:
mix lemon.secrets.set llm_anthropic_api_key <token>

Lemon includes web tools under `runtime.tools.web`. For full setup and troubleshooting, see:
[runtime.tools.web.search]
enabled = true
provider = "brave" # "brave" | "perplexity"
max_results = 5
timeout_seconds = 30
cache_ttl_minutes = 15
[runtime.tools.web.search.failover]
enabled = true
provider = "perplexity"
[runtime.tools.web.search.perplexity]
# Optional if PERPLEXITY_API_KEY / OPENROUTER_API_KEY is set.
api_key = "pplx-..."
base_url = "https://api.perplexity.ai"
model = "perplexity/sonar-pro"
[runtime.tools.web.fetch]
enabled = true
max_chars = 50000
timeout_seconds = 30
cache_ttl_minutes = 15
max_redirects = 3
readability = true
allow_private_network = false
allowed_hostnames = []
[runtime.tools.web.fetch.firecrawl]
# Optional if FIRECRAWL_API_KEY is set.
enabled = true
api_key = "fc-..."
base_url = "https://api.firecrawl.dev"
only_main_content = true
max_age_ms = 172800000
timeout_seconds = 60
[runtime.tools.web.cache]
persistent = true
path = "~/.lemon/cache/web_tools"
max_entries = 100

WASM tools are disabled by default and run in a per-session Rust sidecar.
See docs/tools/wasm.md for runtime behavior and troubleshooting.
[runtime.tools.wasm]
enabled = false
auto_build = true
runtime_path = ""
tool_paths = []
default_memory_limit = 10485760
default_timeout_ms = 60000
default_fuel_limit = 10000000
cache_compiled = true
cache_dir = ""
max_tool_invoke_depth = 4

- `providers.<name>`: API keys and base URLs per provider.
- `defaults`: global default model/provider/thinking/engine.
- `runtime`: runtime behavior and tool settings.
- `runtime.tools.web`: `websearch`/`webfetch` providers, guardrails, cache, and Firecrawl fallback.
- `runtime.tools.wasm`: WASM sidecar runtime controls and discovery paths.
- `profiles.<agent_id>`: assistant profiles (identity + defaults) used by gateway/control-plane.
- `runtime.compaction`: context compaction settings.
- `runtime.retry`: retry settings.
- `runtime.cli`: CLI runner settings (`codex`, `claude`, `kimi`, `opencode`, `pi`).
- `tui`: terminal UI settings.
- `gateway`: Lemon gateway settings, including `queue`, `telegram`, `discord`, `sms`, `voice`, `xmtp`, `projects`, `bindings`, and `engines`.
- `logging`: optional file logging configuration.
When LemonGateway handles a Telegram message, it can optionally map that chat (or topic/thread) to a named project. A project is just a working directory root (repo path) plus optional defaults.
Why it matters:
- The gateway will run engines with `cwd` set to the project root (so file edits/commands happen in the right repo).
- The gateway will load per-project config from `<project_root>/.lemon/config.toml` (which can override agent profiles, models, tool policy, etc. compared to your global `~/.lemon/config.toml`).
- If a chat has no bound project, the gateway falls back to `gateway.default_cwd` (or `~/` by default).
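The fallback order above can be sketched as a small resolver (hypothetical helper; `projects` maps project ids to their config tables):

```python
def resolve_cwd(binding, projects, default_cwd=None):
    """Binding's project root > gateway.default_cwd > home."""
    project_id = binding.get("project")
    if project_id and project_id in projects:
        return projects[project_id]["root"]
    return default_cwd or "~/"
```
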
Define projects under [gateway.projects.<project_id>]:
[gateway.projects.myrepo]
root = "/path/to/myrepo"
# Optional: project-level default engine if a binding doesn't set one.
default_engine = "lemon"

Bindings connect an incoming chat scope to a project/agent/defaults:
[[gateway.bindings]]
transport = "telegram"
chat_id = 123456789
# Optional: bind this chat to a project (must match the `[gateway.projects.<id>]` key)
project = "myrepo"
# Optional: choose which agent profile to use (defaults to "default")
agent_id = "default"
# Optional: per-chat default engine/queue overrides
default_engine = "claude"
queue_mode = "steer"

Notes:
- If you omit `project`, LemonGateway will run with `cwd` set to `gateway.default_cwd` when configured, otherwise `~/`.
- You can also bind at the topic/thread level by setting `topic_id` in the binding (it takes precedence over the chat-level binding when a matching topic exists). `topic_id` corresponds to Telegram's `message_thread_id` (only present in forum topics).
- LemonGateway loads `gateway.*` config on startup; after changing `gateway.projects` or `gateway.bindings`, restart the gateway process.
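Topic-over-chat precedence can be sketched as the matcher below (illustrative, not the actual gateway code; bindings are plain dicts here):

```python
def pick_binding(bindings, chat_id, topic_id=None):
    """Prefer a binding matching both chat and topic; fall back to chat-level."""
    chat_level = None
    for b in bindings:
        if b.get("chat_id") != chat_id:
            continue
        if topic_id is not None and b.get("topic_id") == topic_id:
            return b  # topic-level binding wins
        if b.get("topic_id") is None:
            chat_level = b  # remember the chat-wide fallback
    return chat_level
```
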
Optional fallback cwd:
[gateway]
default_cwd = "~/"

Tips:
- In Telegram, you can set or inspect the current chat/topic working directory at runtime with `/cwd [project_id|path|clear]`.
- `/new <project_id|path>` still works, and setting `/cwd` makes future `/new` sessions in that chat/topic use the same directory.
- `/new` confirmation replies include model, provider, cwd, and session context details.
- If you pass a path, Lemon will register it as a project named after the last path segment (e.g. `~/dev/lemon` => project `lemon`).
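Deriving the project id from the last path segment is essentially (a sketch):

```python
from pathlib import PurePosixPath

def project_id_from_path(path: str) -> str:
    """Last path segment becomes the project id (e.g. ~/dev/lemon -> lemon)."""
    return PurePosixPath(path.rstrip("/")).name
```
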
Lemon can run as an XMTP bot through the absorbed LemonChannels XMTP adapter inside :lemon_gateway.
./bin/lemon-xmtp-bootstrap

This installs bridge dependencies in `apps/lemon_gateway/priv/node_modules` (where `xmtp_bridge.mjs` resolves imports).
[gateway]
enable_xmtp = true
[gateway.xmtp]
env = "production" # production | dev | local
wallet_address = "${XMTP_WALLET_ADDRESS}"
wallet_key_secret = "XMTP_WALLET_KEY" # secret ref; env fallback works if XMTP_WALLET_KEY is set
db_path = "~/.lemon/xmtp-db"
poll_interval_ms = 1500
connect_timeout_ms = 15000
require_live = true # production default: do not allow mock fallback
mock_mode = false # set true only for local bridge testing
# Optional:
# api_url = "https://api.xmtp.network"
# inbox_id = "..."
# sdk_module = "@xmtp/node-sdk"

Notes:
- When `enable_xmtp = true`, Lemon auto-registers and starts the XMTP channel adapter.
- `require_live = true` keeps health/readiness red unless the bridge is truly live (not mock mode).
- `wallet_key_secret` is the canonical credential field. It can point to a Lemon secret name or to an env var name when using secret resolution with env fallback.
- Non-text XMTP messages currently receive a text-only fallback response.
Voice transport is configured under [gateway.voice].
[gateway.voice]
enabled = true
websocket_port = 4047
public_url = "https://example.com"
twilio_account_sid_secret = "twilio_account_sid"
twilio_auth_token_secret = "twilio_auth_token"
twilio_phone_number = "+1234567890"
deepgram_api_key_secret = "deepgram_api_key"
elevenlabs_api_key_secret = "elevenlabs_api_key"
elevenlabs_voice_id = "21m00Tcm4TlvDq8ikWAM"
elevenlabs_output_format = "ulaw_8000"
llm_model = "gpt-4o-mini"
system_prompt = "You are a helpful phone assistant."
max_call_duration_seconds = 600
silence_timeout_ms = 5000

Canonical voice settings are loaded from `gateway.voice`. Legacy `:lemon_gateway` app env
fallbacks remain only as temporary compatibility shims and should not be used for new setup.
If enabled, Telegram voice notes are transcribed and the transcript is routed as a normal text message.
[gateway.telegram]
voice_transcription = true
voice_transcription_model = "gpt-4o-mini-transcribe" # optional
voice_max_bytes = 10485760 # optional (default: 10MB)
# Optional OpenAI-compatible overrides (defaults to providers.openai)
voice_transcription_base_url = "https://api.openai.com/v1"
voice_transcription_api_key = "sk-..."

Enable `/file put` and `/file get` (and optional auto-save for plain document uploads).
[gateway.telegram.files]
enabled = true
auto_put = true
auto_put_mode = "upload" # "upload" | "prompt"
auto_send_generated_images = true # optional: send generated images automatically after a run
auto_send_generated_max_files = 3 # optional: max images auto-sent per run (default: 3)
uploads_dir = "incoming"
media_group_debounce_ms = 1000 # optional (default: 1000ms)
# Optional safety rails
allowed_user_ids = [123456789] # if empty, group uploads require admin
deny_globs = [".git/**", ".env", ".envrc", "**/*.pem", "**/.ssh/**"]
max_upload_bytes = 20971520 # optional (default: 20MB)
max_download_bytes = 52428800 # optional (default: 50MB)
outbound_send_delay_ms = 1000 # optional: delay between auto-sent files/batches to reduce 429s

Commands:
- `/file put [--force] <path>`: upload a Telegram document into the active working root.
- `/file get <path>`: fetch a file (or zip a directory) from the active working root back into Telegram.
If no project is bound for the chat, the active root falls back to gateway.default_cwd (or ~/).
When auto_send_generated_images = true, Lemon tracks image files created/changed during the run and sends up to
auto_send_generated_max_files files back to Telegram automatically at completion (using the same max_download_bytes
limit as /file get).
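The selection amounts to something like the sketch below (hypothetical helper; the tracked-file representation and image extensions are assumptions, the caps mirror `auto_send_generated_max_files` and `max_download_bytes`):

```python
def select_images_to_send(changed_files, max_files=3, max_bytes=52428800):
    """Keep image files under the byte limit, capped at max_files."""
    image_exts = (".png", ".jpg", ".jpeg", ".gif", ".webp")
    picked = []
    for path, size in changed_files:
        if path.lower().endswith(image_exts) and size <= max_bytes:
            picked.append(path)
        if len(picked) == max_files:
            break
    return picked
```
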
When a Telegram run approaches the model context limit, Lemon can proactively mark the session for compaction so the next user message is automatically rewritten with a compact transcript and sent as a fresh session.
[gateway.telegram.compaction]
enabled = true
context_window_tokens = 400000 # optional override; if unset Lemon infers from model/engine
reserve_tokens = 16384 # optional safety margin before limit
trigger_ratio = 0.9 # optional; 0.9 means trigger at 90% of context window

In Telegram group chats, you can gate runs so Lemon only triggers when explicitly invoked:
- `/trigger`: show the current trigger mode.
- `/trigger mentions`: only run on `@botname`, reply-to-bot, or slash commands.
- `/trigger all`: run on all messages.
- `/trigger clear`: clear a topic override (forum topics only).
- `/cwd [project_id|path|clear]`: show, set, or clear the chat/topic working directory override used by future `/new` sessions.
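In mentions mode the gating decision reduces to roughly the following (a sketch; the parameter names are illustrative, not the gateway's actual message fields):

```python
def should_trigger(mode, text, bot_username, reply_to_bot=False):
    """"all" always runs; "mentions" needs @mention, reply-to-bot, or a slash command."""
    if mode == "all":
        return True
    return (
        f"@{bot_username}" in text  # explicit @botname mention
        or reply_to_bot             # reply to one of the bot's messages
        or text.startswith("/")     # slash command
    )
```
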
Forum topic management:
- `/topic <name>`: create a new topic in the current Telegram forum supergroup.