0x0 bundles integrations for 20+ LLM providers. Each provider can be configured globally or per project with API keys, model filtering, and custom endpoints.

Bundled providers

| Provider | ID | Notes |
| --- | --- | --- |
| Anthropic | `anthropic` | Claude models |
| OpenAI | `openai` | GPT and o-series models |
| Google AI | `google` | Gemini models |
| AWS Bedrock | `amazon-bedrock` | Requires AWS credentials |
| Azure OpenAI | `azure` | Requires Azure deployment |
| xAI | `xai` | Grok models |
| Groq | `groq` | Fast inference |
| Mistral | `mistral` | Mistral and Codestral |
| Cohere | `cohere` | Command models |
| DeepInfra | `deepinfra` | Multi-model hosting |
| Cerebras | `cerebras` | Fast inference |
| OpenRouter | `openrouter` | Multi-provider routing |
| Perplexity | `perplexity` | Search-augmented models |
| TogetherAI | `togetherai` | Open-source model hosting |
| Vercel | `vercel` | Vercel AI Gateway |
| GitLab | `gitlab` | GitLab Duo |
| GitHub Copilot | `copilot` | GitHub Copilot models |
| Antigravity | `antigravity` | Antigravity managed models |
| Google Vertex | `google-vertex` | Google Cloud Vertex AI |

Provider config

Configure providers in config.yaml under the provider key:
```yaml
provider:
  openai:
    options:
      apiKey: '{env:OPENAI_API_KEY}'
      baseURL: https://api.openai.com/v1
      timeout: 300000
    models:
      gpt-4.1:
        variants:
          low: {}
          medium: {}
          high: {}
```

Provider options

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | string | — | API key (supports `{env:VAR}` interpolation) |
| `baseURL` | string | — | Base URL override for API requests |
| `enterpriseUrl` | string | — | GitHub Enterprise URL (Copilot provider only) |
| `setCacheKey` | boolean | — | Enable prompt cache key |
| `timeout` | number \| false | 300000 | Request timeout in ms, or `false` to disable |
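As a sketch of how several of these options combine (the values and the Enterprise host are illustrative, not real endpoints):

```yaml
provider:
  copilot:
    options:
      enterpriseUrl: https://github.example.com   # illustrative GitHub Enterprise host
  openai:
    options:
      apiKey: '{env:OPENAI_API_KEY}'
      setCacheKey: true
      timeout: false   # disable the request timeout entirely
```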

Model configuration

  • models: per-model configuration, including variants (reasoning effort levels); a variant can be disabled with disabled: true
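As a sketch of the schema above (the model name and variant keys are illustrative, not taken from the bundled catalog):

```yaml
provider:
  anthropic:
    models:
      claude-sonnet-4:
        variants:
          low: {}
          high:
            disabled: true   # hide this reasoning-effort level
```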

Provider config files

Provider-specific config can be stored in separate files:
  • Global: ~/.config/0x0/providers/<provider-id>.yaml
  • Project: .0x0/providers/<provider-id>.yaml
The filename (without .yaml) becomes the provider ID. These files use the same schema as the provider.<id> config section and are merged at the appropriate precedence level.
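For example, a project-level file at .0x0/providers/openai.yaml might look like the following, assuming (as stated above) the same schema as the provider.<id> section, minus the wrapping key:

```yaml
# .0x0/providers/openai.yaml — the provider ID "openai" comes from the filename
options:
  apiKey: '{env:OPENAI_API_KEY}'
models:
  gpt-4.1:
    variants:
      medium: {}
```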

Setting the default model

Set the default model with the top-level model key in config.yaml:

```yaml
model: anthropic/claude-sonnet-4
```
The format is provider-id/model-id. This can be overridden per-agent or via the --model CLI flag. A separate small_model field sets the model used for lightweight tasks like title generation:
```yaml
small_model: openai/gpt-4.1-mini
```

Authentication

Manage provider credentials:
```sh
# Interactive login (provider selection, OAuth or API key)
0x0 auth login

# Log in to a specific provider URL
0x0 auth login https://api.openai.com

# List configured providers and env var status
0x0 auth list

# Log out
0x0 auth logout
```
Credentials are stored in ~/.config/0x0/auth.json.

Listing models

```sh
# List all available models
0x0 models

# Filter by provider
0x0 models anthropic

# Verbose output with costs and metadata
0x0 models --verbose

# Refresh the model cache
0x0 models --refresh
```
Model definitions are fetched from models.dev and cached locally. Use models_url or models_path in config to override the source.
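A sketch of overriding the catalog source (the URL and path are placeholders, not real endpoints):

```yaml
# Use a mirror of the models.dev catalog…
models_url: https://models.example.com/api.json
# …or read definitions from a local file instead:
# models_path: /path/to/models.json
```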

See also