Most TensorZero deployments will not require Valkey or Redis.
TensorZero can use a Redis-compatible data store like Valkey as a high-performance backend for its rate limiting functionality.
We recommend Valkey over Postgres if you're handling 100+ QPS or have stringent latency requirements.
TensorZero’s rate limiting implementation can achieve sub-millisecond P99 latency at 10k+ QPS using Valkey.
Deploy
You can self-host Valkey or use a managed Redis-compatible service (e.g. AWS ElastiCache, GCP Memorystore).
Add Valkey to your docker-compose.yml:

services:
  valkey:
    image: valkey/valkey:8
    ports:
      - "6379:6379"
    volumes:
      - valkey-data:/data

volumes:
  valkey-data:
Run Valkey with Docker:

docker run -d --name valkey -p 6379:6379 valkey/valkey:8
To configure TensorZero to use Valkey, set the TENSORZERO_VALKEY_URL environment variable to your Valkey connection string.
TENSORZERO_VALKEY_URL="redis://[hostname]:[port]"
# Example:
TENSORZERO_VALKEY_URL="redis://localhost:6379"
TensorZero automatically loads the required Lua functions into Valkey on startup.
No manual setup is required.
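If you want to confirm that the functions were registered, you can list the loaded function libraries with valkey-cli (or redis-cli). The library name TensorZero registers is not specified here, so treat this only as a sanity check that a library is present:

docker exec valkey valkey-cli FUNCTION LIST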
If both TENSORZERO_VALKEY_URL and TENSORZERO_POSTGRES_URL are set, the gateway uses Valkey for rate limiting.
Best Practices
Durability
A critical failure of Valkey (e.g. server crash) may result in loss of rate limiting data since the last backup.
This is generally tolerable if your rate limiting windows are short (e.g. minutes), but if you require precise limits or longer time windows, we recommend configuring recurring RDB (point-in-time) snapshots for improved durability.
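As a sketch, you could enable snapshots by passing save rules to the server command in the docker-compose.yml above. The thresholds shown (snapshot if at least 1 write occurred in the last 60 seconds) are illustrative; tune them to your durability and performance needs:

services:
  valkey:
    image: valkey/valkey:8
    # Persist an RDB snapshot to /data when the save rule is met
    command: ["valkey-server", "--save", "60", "1"]
    ports:
      - "6379:6379"
    volumes:
      - valkey-data:/data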