- A model specifies a particular LLM (e.g. GPT-5 or your fine-tuned Llama 3).
- A model provider specifies how you can access a given model (e.g. GPT-5 is available through both OpenAI and Azure).
## Configure a model & model provider
A model has an arbitrary name and a list of providers. Each provider has an arbitrary name, a type, and other fields that depend on the provider type. The skeleton of a model and provider configuration looks like this:
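The following is a minimal sketch of that skeleton; `my_model_name` and `my_provider_name` are placeholder names, and the fields beyond `type` depend on the provider.

```toml
# tensorzero.toml
[models.my_model_name]
routing = ["my_provider_name"]  # providers to try, in order

[models.my_model_name.providers.my_provider_name]
type = "..."  # provider type, e.g. "openai"
# ... other fields depend on the provider type ...
```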
### Example: GPT-5 + OpenAI
Let’s configure a provider for GPT-5 from OpenAI. We’ll call our model `my_gpt_5` and our provider `my_openai_provider`, with type `openai`.
The only required field for the `openai` provider is `model_name`.
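A sketch of this configuration, assuming `gpt-5` is the model identifier that OpenAI expects:

```toml
# tensorzero.toml
[models.my_gpt_5]
routing = ["my_openai_provider"]

[models.my_gpt_5.providers.my_openai_provider]
type = "openai"
model_name = "gpt-5"  # the model identifier on OpenAI's side
```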
You can then reference the model as `my_gpt_5` when calling the inference endpoint or when configuring functions and variants.
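For instance, here is a minimal sketch of an inference call against a locally running gateway; the port and the exact JSON payload shape are assumptions, not a definitive reference:

```python
import requests

# Hypothetical request to a locally running TensorZero gateway.
response = requests.post(
    "http://localhost:3000/inference",
    json={
        "model_name": "my_gpt_5",  # the model name defined in tensorzero.toml
        "input": {"messages": [{"role": "user", "content": "Hello!"}]},
    },
)
print(response.json())
```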
## Configure multiple providers for fallback & routing
You can configure multiple providers for the same model to enable automatic fallbacks. The gateway will try each provider in the `routing` field in order until one succeeds.
This helps mitigate provider downtime and rate limiting.
For example, you might configure both OpenAI and Azure as providers for GPT-5:
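A sketch of such a configuration; the Azure fields and values (`deployment_id`, `endpoint`) are illustrative assumptions rather than a definitive reference:

```toml
# tensorzero.toml
[models.my_gpt_5]
routing = ["my_openai_provider", "my_azure_provider"]  # tried in order

[models.my_gpt_5.providers.my_openai_provider]
type = "openai"
model_name = "gpt-5"

[models.my_gpt_5.providers.my_azure_provider]
type = "azure"
deployment_id = "my-gpt-5-deployment"                # assumed Azure deployment name
endpoint = "https://your-resource.openai.azure.com"  # assumed Azure endpoint URL
```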
## Use shorthand model names
If you don’t need advanced functionality like fallback routing or custom credentials, you can use shorthand model names directly in your variant configuration. TensorZero supports shorthand names like:

- `openai::gpt-5`
- `anthropic::claude-3-5-haiku-20241022`
- `google::gemini-2.0-flash-exp`
You can use these shorthand names in the `model` field without defining a separate model configuration block.
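A sketch of a variant that uses a shorthand name; the function and variant names are placeholders, and the variant type is assumed to be `chat_completion`:

```toml
# tensorzero.toml
[functions.my_function.variants.my_variant]
type = "chat_completion"
model = "openai::gpt-5"  # no separate [models.*] block needed
```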