- Call any LLM with the same API. TensorZero unifies every major LLM API (e.g. OpenAI) and inference server (e.g. Ollama).
- Get started with a few lines of code. Later, you can optionally add observability, automatic fallbacks, A/B testing, and much more.
- Use any programming language. You can use TensorZero with its Python SDK, any OpenAI SDK (Python, Node, Go, etc.), or its HTTP API.
The TensorZero Python SDK provides a unified API for calling any LLM.
Step 1: Set up the credentials for your LLM provider

For example, if you're using OpenAI, you can set the OPENAI_API_KEY environment variable to your API key.
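On macOS or Linux, this might look like the following (the key value below is a placeholder, not a real credential):

```shell
# Replace the placeholder with your actual OpenAI API key.
export OPENAI_API_KEY="sk-..."
```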
Step 2: Install the TensorZero Python SDK

You can install the TensorZero SDK with a Python package manager like pip.
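For example, with pip:

```shell
# Install the TensorZero Python SDK from PyPI.
pip install tensorzero
```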
Step 3: Initialize the TensorZero Gateway

Let's initialize the TensorZero Gateway. For simplicity, we'll use an embedded gateway without observability or custom configuration.
Step 4: Call the LLM