# Supported Providers
AI SpendOps supports 14 AI providers. The proxy routes requests based on the first path segment after `/v1/`.
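A minimal sketch of how this first-segment routing might work. The `UPSTREAMS` mapping and `resolve_upstream` function are illustrative, not the proxy's actual implementation, and the sketch assumes the path remainder is appended to the upstream base unchanged:

```python
# Hypothetical routing sketch; names and path-rewrite behavior are
# assumptions, not AI SpendOps' actual code.
UPSTREAMS = {
    "openai": "https://api.openai.com",
    "anthropic": "https://api.anthropic.com",
    "groq": "https://api.groq.com/openai",
}

def resolve_upstream(path: str) -> str:
    """Map a proxy path like /v1/openai/chat/completions to an upstream URL."""
    prefix = "/v1/"
    if not path.startswith(prefix):
        raise ValueError(f"unexpected path: {path}")
    # First segment after /v1/ selects the provider; the rest is forwarded.
    provider, _, rest = path[len(prefix):].partition("/")
    base = UPSTREAMS.get(provider)
    if base is None:
        raise ValueError(f"unknown provider: {provider}")
    return f"{base}/{rest}"

print(resolve_upstream("/v1/groq/chat/completions"))
# https://api.groq.com/openai/chat/completions
```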
## Provider reference
| Provider | Route | Upstream | Streaming Usage |
|---|---|---|---|
| OpenAI | `/v1/openai/*` | `api.openai.com` | Auto-injected |
| Anthropic | `/v1/anthropic/*` | `api.anthropic.com` | Native |
| Google AI Studio | `/v1/google/*` | `generativelanguage.googleapis.com` | Auto-injected |
| OpenRouter | `/v1/openrouter/*` | `openrouter.ai/api` | Always included |
| xAI | `/v1/xai/*` | `api.x.ai` | Auto-injected |
| Groq | `/v1/groq/*` | `api.groq.com/openai` | Standard |
| DeepInfra | `/v1/deepinfra/*` | `api.deepinfra.com/v1/openai` | Standard |
| DeepSeek | `/v1/deepseek/*` | `api.deepseek.com` | Standard |
| Mistral | `/v1/mistral/*` | `api.mistral.ai` | Standard |
| Perplexity | `/v1/perplexity/*` | `api.perplexity.ai` | Standard |
| Fireworks | `/v1/fireworks/*` | `api.fireworks.ai/inference` | Standard |
| Cerebras | `/v1/cerebras/*` | `api.cerebras.ai` | Standard |
| Novita | `/v1/novita/*` | `api.novita.ai/v3/openai` | Standard |
| Nebius | `/v1/nebius/*` | `api.tokenfactory.nebius.com` | Standard |
## Streaming usage methods
- Auto-injected: The proxy adds `stream_options: { "include_usage": true }` to the request if not already present
- Native: Provider always includes usage in streaming responses
- Always included: Provider includes usage data by default
- Standard: Usage from non-streaming responses; streaming usage via the final chunk
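The "Auto-injected" behavior can be sketched as a small request-body rewrite. This is a hypothetical helper, not the proxy's actual code; it assumes the body is JSON and that injection only applies to streaming requests:

```python
import json

def inject_stream_options(body_bytes: bytes) -> bytes:
    """Illustrative sketch: add stream_options.include_usage to a
    streaming request body if the client did not set it."""
    body = json.loads(body_bytes)
    # Only streaming requests need usage injected, and an explicit
    # client-provided stream_options is left untouched.
    if body.get("stream") and "stream_options" not in body:
        body["stream_options"] = {"include_usage": True}
    return json.dumps(body).encode()

req = b'{"model": "gpt-4o-mini", "stream": true, "messages": []}'
out = json.loads(inject_stream_options(req))
print(out["stream_options"])
# {'include_usage': True}
```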
## Any endpoint works
The proxy does not limit you to specific endpoints. Any endpoint your provider supports — chat completions, embeddings, image generation, audio, fine-tuning — works through the proxy.
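Because routing keys only on the provider segment, a proxy path for any endpoint is just the upstream path with the provider prefix. A small sketch (the helper is hypothetical, assuming the proxy forwards the remainder unchanged):

```python
def proxy_path(provider: str, upstream_path: str) -> str:
    """Hypothetical helper: prefix an upstream endpoint path with the
    provider segment to form the proxy path."""
    return f"/v1/{provider}/{upstream_path.lstrip('/')}"

print(proxy_path("openai", "/embeddings"))          # /v1/openai/embeddings
print(proxy_path("openai", "/images/generations"))  # /v1/openai/images/generations
print(proxy_path("anthropic", "/messages"))         # /v1/anthropic/messages
```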