The azure provider supports both Azure OpenAI Service and Azure AI Foundry. Both use the same OpenAI-compatible API, so the configuration is nearly identical; you just use different endpoint URLs.
This guide shows how to set up a minimal deployment to use the TensorZero Gateway with Azure OpenAI Service and Azure AI Foundry.
Azure OpenAI Service
Setup
For this minimal setup, you'll need just two files in your project directory: config/tensorzero.toml and docker-compose.yml.
Configuration
Create a minimal configuration file that defines a model and a simple chat function:
config/tensorzero.toml
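As a sketch, the file might look something like the following; the model name, deployment_id, endpoint URL, and function/variant names are placeholders to adapt to your own Azure OpenAI deployment:

```toml
[models.gpt-4o-mini-azure]
routing = ["azure"]

[models.gpt-4o-mini-azure.providers.azure]
# Azure provider: point it at your Azure OpenAI resource and deployment
type = "azure"
deployment_id = "gpt-4o-mini"
endpoint = "https://your-resource.openai.azure.com"

[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.my_variant_name]
type = "chat_completion"
model = "gpt-4o-mini-azure"
```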
endpoint = "env::AZURE_OPENAI_ENDPOINT" to read from the environment variable AZURE_OPENAI_ENDPOINT on startup or endpoint = "dynamic::azure_openai_endpoint" to read from a dynamic credential azure_openai_endpoint on each inference.
Credentials
You must set the AZURE_OPENAI_API_KEY environment variable before running the gateway.
You can customize the credential location by setting the api_key_location to env::YOUR_ENVIRONMENT_VARIABLE or dynamic::ARGUMENT_NAME.
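For example, a provider block along these lines reads the key from a custom environment variable (the variable name YOUR_AZURE_API_KEY is hypothetical):

```toml
[models.gpt-4o-mini-azure.providers.azure]
type = "azure"
deployment_id = "gpt-4o-mini"
endpoint = "https://your-resource.openai.azure.com"
# Read the API key from a custom environment variable instead of AZURE_OPENAI_API_KEY
api_key_location = "env::YOUR_AZURE_API_KEY"
```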
See the Credential Management guide and Configuration Reference for more information.
Deployment (Docker Compose)
Create a minimal Docker Compose configuration:
docker-compose.yml
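A minimal sketch, assuming the tensorzero/gateway image, the configuration directory above mounted at /app/config, and the gateway's default port 3000:

```yaml
services:
  gateway:
    image: tensorzero/gateway
    volumes:
      # Mount the configuration directory created above (read-only)
      - ./config:/app/config:ro
    command: --config-file /app/config/tensorzero.toml
    environment:
      # Forward the Azure OpenAI API key into the container
      - AZURE_OPENAI_API_KEY=${AZURE_OPENAI_API_KEY:?Environment variable AZURE_OPENAI_API_KEY must be set.}
    ports:
      - "3000:3000"
```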
You can start the gateway with docker compose up.
Inference
Make an inference request to the gateway:
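For example, with curl (assuming the function name my_function_name from the configuration sketch above and the gateway listening on its default port 3000):

```bash
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "function_name": "my_function_name",
    "input": {
      "messages": [
        {"role": "user", "content": "What is the capital of Japan?"}
      ]
    }
  }'
```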
Other Features
Generate embeddings
The Azure OpenAI Service model provider supports generating embeddings. You can find a complete code example on GitHub.
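As a rough sketch, an embedding model can be declared in tensorzero.toml alongside the chat model; the deployment name and endpoint below are placeholders, and the exact setup is shown in the GitHub example:

```toml
[embedding_models.text-embedding-3-small]
routing = ["azure"]

[embedding_models.text-embedding-3-small.providers.azure]
type = "azure"
deployment_id = "text-embedding-3-small"
endpoint = "https://your-resource.openai.azure.com"
```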
Azure AI Foundry
Azure AI Foundry provides access to models from multiple providers (Meta Llama, Mistral, xAI Grok, Microsoft Phi, Cohere, and more). See the list of available models. The same azure provider works with Azure AI Foundry.
The key difference is the endpoint URL: point the provider at your Azure AI Foundry endpoint instead of an Azure OpenAI Service endpoint.
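For example, a model block like the following (the model name, deployment name, and endpoint URL are placeholders; copy the actual endpoint from your Azure AI Foundry deployment):

```toml
[models.my-foundry-model]
routing = ["azure"]

[models.my-foundry-model.providers.azure]
type = "azure"
deployment_id = "your-foundry-deployment-name"  # placeholder deployment name
endpoint = "https://your-foundry-endpoint"      # placeholder: use your Azure AI Foundry endpoint URL
```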
All other configuration options (credentials, Docker Compose, inference) work the same as Azure OpenAI Service above.