
Custom Providers

ForgeCode supports OpenRouter, OpenAI, Anthropic, Google Vertex AI, Groq, and Amazon Bedrock out of the box. For everything else — self-hosted models, enterprise API gateways, regional endpoints, or any service that speaks a supported API format — you configure a custom provider in .forge.toml.

Adding a Provider

Each provider gets a [[providers]] entry in your .forge.toml. Only id and url are required; the remaining fields in this example are optional but commonly set:

[[providers]]
id = "my-provider"
url = "https://my-llm-gateway.internal/v1/chat/completions"
api_key_vars = "MY_PROVIDER_API_KEY"
response_type = "OpenAI"
auth_methods = ["api_key"]

id is the name you'll use to reference this provider in the rest of your config and with the :provider command inside ForgeCode. url is the full chat completions endpoint. api_key_vars names the environment variable that holds the API key.

Pointing Sessions at a Provider

The [session] block sets the default model and provider for every conversation. Set provider_id to the id you defined above:

[session]
provider_id = "my-provider"
model_id = "meta-llama/Llama-3.3-70B-Instruct"

You can override this per-session from inside ForgeCode using the :provider command, which lets you switch providers interactively without editing any files.

Full Provider Field Reference

| Field | Required | Description |
| --- | --- | --- |
| id | Yes | Unique provider identifier used in model paths (e.g. "my_provider"). |
| url | Yes | Chat completions URL; may contain {{VAR}} placeholders substituted from url_param_vars. |
| api_key_vars | No | Name of the environment variable holding the API key for this provider. |
| auth_methods | No | Authentication methods; defaults to ["api_key"]. Use ["google_adc"] for Google Application Default Credentials. |
| custom_headers | No | Additional HTTP headers sent with every request to this provider. |
| models | No | URL for fetching the model list; may contain {{VAR}} placeholders. |
| provider_type | No | Provider category: "llm" (default) or "context_engine" for code indexing and search. |
| response_type | No | Wire protocol: OpenAI, OpenAIResponses, Anthropic, Bedrock, Google, or OpenCode. |
| url_param_vars | No | List of environment variable names whose values are substituted into {{VAR}} placeholders in url and models. |
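To tie the reference together, here is a sketch of an entry that uses most of the optional fields. The gateway URL, variable names, and the X-Gateway-Route header are illustrative placeholders, not real services or required headers:

```toml
[[providers]]
id = "full-example"
url = "{{GATEWAY_URL}}/v1/chat/completions"
models = "{{GATEWAY_URL}}/v1/models"
url_param_vars = ["GATEWAY_URL"]
api_key_vars = "GATEWAY_API_KEY"
response_type = "OpenAI"
provider_type = "llm"
auth_methods = ["api_key"]

# Extra headers attached to every request to this provider
[providers.custom_headers]
X-Gateway-Route = "forgecode"
```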

Multiple Custom Providers

You can define as many [[providers]] entries as you need:

[[providers]]
id = "local"
url = "http://localhost:11434/v1/chat/completions"
models = "http://localhost:11434/v1/models"
response_type = "OpenAI"
auth_methods = ["api_key"]

[[providers]]
id = "staging-gateway"
url = "https://staging-llm.internal/v1/chat/completions"
api_key_vars = "STAGING_LLM_KEY"
response_type = "OpenAI"
auth_methods = ["api_key"]

Switch between them with :provider at any time, or point specific operations (session, commit, suggest) at different entries.
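As a sketch of per-operation routing, assuming the [commit] block accepts the same provider_id and model_id keys as [session] (check the .forge.toml reference for the exact key names; the model ids below are illustrative), you might send commit-message generation to the local provider while sessions use the staging gateway:

```toml
[session]
provider_id = "staging-gateway"
model_id = "meta-llama/Llama-3.3-70B-Instruct"

[commit]
provider_id = "local"
model_id = "qwen2.5-coder:7b"
```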

Overriding a Built-In Provider

If your id matches a built-in provider (e.g. "openai", "anthropic"), the entry overrides that provider's fields rather than creating a new one. This lets you swap out the endpoint for a built-in provider without fully replacing it:

[[providers]]
id = "openai"
url = "https://openai-proxy.corp.internal/v1/chat/completions"
api_key_vars = "CORP_OPENAI_KEY"
response_type = "OpenAI"
auth_methods = ["api_key"]

Entries with a new id are appended and become available for model selection alongside the built-ins.

Environment Variables

Both api_key_vars and url_param_vars reference environment variable names — ForgeCode reads the values from your environment at runtime. You can set them in your shell profile or in a ~/.env file, which ForgeCode loads automatically on every run:

# ~/.env
OPENAI_API_KEY=sk-...
OPENAI_URL=https://my-llm-gateway.internal/v1
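If you prefer your shell profile over ~/.env, exporting the same variables works just as well; a minimal sketch with placeholder values:

```shell
# In ~/.bashrc or ~/.zshrc (values here are placeholders)
export OPENAI_API_KEY="sk-placeholder"
export OPENAI_URL="https://my-llm-gateway.internal/v1"
```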

URL Template Variables

Both url and models support {{VAR}} placeholders. Declare the variable names to substitute in url_param_vars as a list of environment variable names:

[[providers]]
id = "openai_compatible"
api_key_vars = "OPENAI_API_KEY"
url_param_vars = ["OPENAI_URL"]
response_type = "OpenAI"
url = "{{OPENAI_URL}}/chat/completions"
models = "{{OPENAI_URL}}/models"
auth_methods = ["api_key"]

At runtime ForgeCode reads the value of each variable in url_param_vars and substitutes it into the matching {{VAR}} placeholder in url and models. If a provider has no dynamic URL segments, pass an empty list: url_param_vars = [].
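When a URL has more than one dynamic segment, list every variable; each {{VAR}} is replaced by the value of the environment variable of the same name. A sketch with illustrative region and project variables (the hostname and path layout are made up for illustration):

```toml
[[providers]]
id = "regional-gateway"
url = "https://{{LLM_REGION}}.gateway.example.com/{{LLM_PROJECT}}/v1/chat/completions"
url_param_vars = ["LLM_REGION", "LLM_PROJECT"]
api_key_vars = "GATEWAY_API_KEY"
response_type = "OpenAI"
auth_methods = ["api_key"]
```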

Custom Headers

To send additional headers with every request — for example, to pass a gateway token or routing key — use a [providers.custom_headers] table directly after the provider entry:

[[providers]]
id = "kimi_coding"
api_key_vars = "KIMI_API_KEY"
url_param_vars = []
response_type = "OpenAI"
url = "https://api.kimi.com/coding/v1/chat/completions"
models = "https://api.kimi.com/coding/v1/models"
auth_methods = ["api_key"]

[providers.custom_headers]
User-Agent = "KimiCLI/1.0.0"

Google Application Default Credentials

For providers that use Google ADC instead of an API key, set auth_methods to ["google_adc"]:

[[providers]]
id = "vertex-custom"
url = "https://us-central1-aiplatform.googleapis.com/v1/projects/my-project/locations/us-central1/endpoints/openapi/chat/completions"
response_type = "Google"
auth_methods = ["google_adc"]
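With ["google_adc"], ForgeCode resolves credentials through Google's standard ADC lookup chain. Locally you can provision them with gcloud auth application-default login, or point ADC at a service-account key through the standard GOOGLE_APPLICATION_CREDENTIALS variable; the key path below is illustrative:

```shell
# Standard ADC override: point at a service-account key file (path is a placeholder)
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/vertex-sa.json"
```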

Provider with Custom Certificate Authority

If your endpoint sits behind a corporate proxy or uses a private CA, configure ForgeCode to trust it via the [http] section of .forge.toml:

[[providers]]
id = "enterprise-gateway"
url = "https://llm-gateway.corp.internal/v1/chat/completions"
api_key_vars = "CORP_LLM_KEY"
response_type = "OpenAI"
auth_methods = ["api_key"]

[http]
root_cert_paths = ["/etc/ssl/certs/corp-ca.pem"]

See Proxy Configuration for the full certificate and proxy setup.

Verifying the Configuration

Open the config file directly from any ForgeCode session:

:config-edit

Then switch to your provider with the :provider command to confirm it loads and responds. If ForgeCode can't reach the endpoint, it surfaces a connection error; check that url is correct and reachable from your machine, and that any environment variables named in api_key_vars and url_param_vars are set.

The full list of configuration options is documented in the .forge.toml reference.