Supported Providers

AI SpendOps supports 14 AI providers. The proxy routes requests based on the first path segment after /v1/.
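
The routing rule can be sketched as a lookup on the first path segment. This is a minimal illustration, not the proxy's actual implementation: the provider-to-upstream mapping comes from the reference table, but the function, its error handling, and the exact way the remainder of the path is joined onto the upstream host are assumptions.

```python
# Minimal sketch of the routing rule: the first path segment after /v1/
# selects the upstream host (subset of the provider reference table).
UPSTREAMS = {
    "openai": "api.openai.com",
    "anthropic": "api.anthropic.com",
    "google": "generativelanguage.googleapis.com",
    "groq": "api.groq.com/openai",
}

def route(path: str) -> str:
    # "/v1/anthropic/v1/messages" -> segment "anthropic", rest "v1/messages"
    prefix = "/v1/"
    if not path.startswith(prefix):
        raise ValueError(f"unroutable path: {path}")
    segment, _, rest = path[len(prefix):].partition("/")
    if segment not in UPSTREAMS:
        raise ValueError(f"unknown provider: {segment}")
    return f"https://{UPSTREAMS[segment]}/{rest}"

print(route("/v1/anthropic/v1/messages"))
# -> https://api.anthropic.com/v1/messages
```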

Provider reference

| Provider         | Route             | Upstream                          | Streaming Usage |
| ---------------- | ----------------- | --------------------------------- | --------------- |
| OpenAI           | /v1/openai/*      | api.openai.com                    | Auto-injected   |
| Anthropic        | /v1/anthropic/*   | api.anthropic.com                 | Native          |
| Google AI Studio | /v1/google/*      | generativelanguage.googleapis.com | Auto-injected   |
| OpenRouter       | /v1/openrouter/*  | openrouter.ai/api                 | Always included |
| xAI              | /v1/xai/*         | api.x.ai                          | Auto-injected   |
| Groq             | /v1/groq/*        | api.groq.com/openai               | Standard        |
| DeepInfra        | /v1/deepinfra/*   | api.deepinfra.com/v1/openai       | Standard        |
| DeepSeek         | /v1/deepseek/*    | api.deepseek.com                  | Standard        |
| Mistral          | /v1/mistral/*     | api.mistral.ai                    | Standard        |
| Perplexity       | /v1/perplexity/*  | api.perplexity.ai                 | Standard        |
| Fireworks        | /v1/fireworks/*   | api.fireworks.ai/inference        | Standard        |
| Cerebras         | /v1/cerebras/*    | api.cerebras.ai                   | Standard        |
| Novita           | /v1/novita/*      | api.novita.ai/v3/openai           | Standard        |
| Nebius           | /v1/nebius/*      | api.tokenfactory.nebius.com       | Standard        |

Streaming usage methods

  • Auto-injected: The proxy adds stream_options: { include_usage: true } if not present
  • Native: Provider always includes usage in streaming responses
  • Always included: Provider includes usage data by default
  • Standard: Usage from non-streaming responses; streaming usage via final chunk
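The auto-injection behavior can be sketched as a small transform on the request body. This is an illustrative sketch, not the proxy's code: the guard that injection only applies to streaming requests is an assumption (stream_options is only meaningful when stream is set), and the doc's rule that an existing stream_options is left untouched is preserved.

```python
def inject_usage(body: dict) -> dict:
    """Add stream_options.include_usage to a streaming request body
    if the caller did not already set it ("Auto-injected" behavior)."""
    # Assumption: injection only applies when the request streams.
    if body.get("stream") and "stream_options" not in body:
        return {**body, "stream_options": {"include_usage": True}}
    return body
```

A caller-supplied stream_options therefore always wins, so you can opt out (or configure it differently) per request.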

Any endpoint works

The proxy does not limit you to specific endpoints. Any endpoint your provider supports — chat completions, embeddings, image generation, audio, fine-tuning — works through the proxy.
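For example, the OpenAI chat, embeddings, image, and audio endpoints all take the same per-provider prefix. The proxy host below is a hypothetical placeholder, and the assumption that the upstream path (including its own /v1/) follows the provider segment verbatim matches the routing rule described above but is not confirmed by this page.

```python
# Illustration only: proxy.example.com is a hypothetical proxy address.
PROXY = "https://proxy.example.com"

endpoints = [
    "/v1/openai/v1/chat/completions",    # chat completions
    "/v1/openai/v1/embeddings",          # embeddings
    "/v1/openai/v1/images/generations",  # image generation
    "/v1/openai/v1/audio/speech",        # audio
]

urls = [PROXY + path for path in endpoints]
```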