Supported Providers

AI SpendOps supports 15 AI providers. The proxy routes each request to the matching upstream based on the first path segment after /v1/.
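As a rough illustration of the routing rule, here is a minimal sketch of first-segment dispatch. The route table is abbreviated (three of the fifteen providers) and the function name is hypothetical, not part of the AI SpendOps API:

```python
# Abbreviated route table; see the provider reference for the full list.
UPSTREAMS = {
    "openai": "api.openai.com",
    "anthropic": "api.anthropic.com",
    "groq": "api.groq.com/openai",
}

def resolve_upstream(path: str) -> str:
    """Return the upstream host for a proxied request path.

    The provider key is the first path segment after /v1/.
    """
    parts = path.strip("/").split("/")
    if len(parts) < 2 or parts[0] != "v1":
        raise ValueError(f"unroutable path: {path}")
    provider = parts[1]
    if provider not in UPSTREAMS:
        raise ValueError(f"unknown provider: {provider}")
    return UPSTREAMS[provider]

# resolve_upstream("/v1/openai/chat/completions") -> "api.openai.com"
```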

Provider reference

| Provider | Route | Upstream | Streaming Usage |
| --- | --- | --- | --- |
| OpenAI | /v1/openai/* | api.openai.com | Auto-injected |
| Anthropic | /v1/anthropic/* | api.anthropic.com | Native |
| Google AI Studio | /v1/google-ai-studio/* | generativelanguage.googleapis.com | Auto-injected |
| OpenRouter | /v1/openrouter/* | openrouter.ai/api | Always included |
| xAI | /v1/xai/* | api.x.ai | Auto-injected |
| Groq | /v1/groq/* | api.groq.com/openai | Auto-injected |
| DeepInfra | /v1/deepinfra/* | api.deepinfra.com/v1/openai | Standard (no injection) |
| DeepSeek | /v1/deepseek/* | api.deepseek.com | Auto-injected |
| Mistral | /v1/mistral/* | api.mistral.ai | Standard (no injection) |
| Perplexity | /v1/perplexity/* | api.perplexity.ai | Auto-injected |
| Fireworks | /v1/fireworks/* | api.fireworks.ai/inference | Standard (no injection) |
| Cerebras | /v1/cerebras/* | api.cerebras.ai | Auto-injected |
| Novita | /v1/novita/* | api.novita.ai/v3/openai | Auto-injected |
| Nebius | /v1/nebius/* | api.tokenfactory.nebius.com | Auto-injected |
| z.ai | /v1/zai/* | api.z.ai/api/paas/v4 | Auto-injected |

Streaming usage methods

  • Auto-injected: The proxy adds stream_options: { include_usage: true } if not present
  • Native: Provider always includes usage in streaming responses
  • Always included: Provider includes usage data by default
  • Standard (no injection): Provider does not support stream_options; usage is available only from non-streaming responses
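The Auto-injected behavior can be sketched as a small request-body transform. This is an illustrative approximation, not the proxy's actual implementation; note that a client's own stream_options, if present, are left untouched:

```python
import json

def inject_usage(body_bytes: bytes) -> bytes:
    """For streaming requests, add stream_options.include_usage when absent.

    Non-streaming requests and requests that already set stream_options
    pass through unchanged.
    """
    body = json.loads(body_bytes)
    if body.get("stream") and "stream_options" not in body:
        body["stream_options"] = {"include_usage": True}
    return json.dumps(body).encode()
```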

Any endpoint works

The proxy does not limit you to specific endpoints. Any endpoint your provider supports — chat completions, embeddings, image generation, audio, fine-tuning — works through the proxy.
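In practice this means any provider endpoint can be reached by prefixing its path with the proxy route. A minimal sketch, assuming a placeholder proxy host (not a real AI SpendOps endpoint) and that the remainder of the path is forwarded to the upstream as-is:

```python
# Placeholder host for illustration only.
PROXY_BASE = "https://proxy.example.com"

def proxied_url(provider: str, endpoint: str) -> str:
    """Rewrite any provider endpoint (chat completions, embeddings,
    image generation, audio, fine-tuning, ...) into its proxy route."""
    return f"{PROXY_BASE}/v1/{provider}/{endpoint.lstrip('/')}"
```

For example, `proxied_url("openai", "/embeddings")` yields `https://proxy.example.com/v1/openai/embeddings`.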