OpenAI

Route your OpenAI API calls through AI SpendOps for automatic usage tracking and cost attribution.

Configuration

Setting           Value
Route             /v1/openai/*
Upstream          https://api.openai.com
Auth header       Authorization: Bearer sk-...
Streaming usage   Auto-injected (stream_options.include_usage)

SDK base URL

https://proxy.aispendops.com/v1/openai/v1
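The doubled /v1 is intentional: the proxy route ends in /v1/openai, and the OpenAI SDK appends endpoint paths such as /v1/chat/completions to the base URL. A minimal sketch of the path mapping (the function is illustrative, not part of the proxy):

```python
def upstream_url(proxy_path: str) -> str:
    """Map a proxied request path to the OpenAI upstream URL.

    Illustrative only: strips the /v1/openai route prefix and
    forwards the remainder to https://api.openai.com.
    """
    prefix = "/v1/openai"
    assert proxy_path.startswith(prefix)
    return "https://api.openai.com" + proxy_path[len(prefix):]

# The SDK base URL ends in /v1/openai/v1, so a chat completions
# call resolves to the standard OpenAI /v1/chat/completions path.
print(upstream_url("/v1/openai/v1/chat/completions"))
# -> https://api.openai.com/v1/chat/completions
```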

Example

curl https://proxy.aispendops.com/v1/openai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "X-ASO-API-Key: aso_k_yourkey.secret" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'

Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.aispendops.com/v1/openai/v1",
    default_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)

Usage fields

Field                                         Description
prompt_tokens                                 Input tokens
completion_tokens                             Output tokens
total_tokens                                  Sum of prompt and completion tokens
prompt_tokens_details.cached_tokens           Tokens served from the prompt cache
prompt_tokens_details.audio_tokens            Audio input tokens
prompt_tokens_details.image_tokens            Image input tokens
completion_tokens_details.reasoning_tokens    Reasoning tokens (o-series models)
completion_tokens_details.audio_tokens        Audio output tokens
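To illustrate how these fields feed cost attribution, here is a sketch that prices a usage object. The function and the per-token rates are placeholders for illustration, not AI SpendOps internals or real gpt-4o pricing:

```python
def estimate_cost(usage: dict, input_rate: float, output_rate: float,
                  cached_rate: float) -> float:
    """Estimate request cost in USD from an OpenAI usage object.

    Rates are USD per token and are caller-supplied placeholders;
    cached prompt tokens are billed at the (lower) cached rate.
    """
    cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
    uncached = usage["prompt_tokens"] - cached
    return (uncached * input_rate
            + cached * cached_rate
            + usage["completion_tokens"] * output_rate)

usage = {
    "prompt_tokens": 1000,
    "completion_tokens": 200,
    "total_tokens": 1200,
    "prompt_tokens_details": {"cached_tokens": 400},
}
# Placeholder rates: $2.50/1M input, $10/1M output, $1.25/1M cached.
cost = estimate_cost(usage, 2.5e-6, 10e-6, 1.25e-6)
print(f"${cost:.6f}")  # -> $0.004000
```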

Notes

  • Supports all OpenAI endpoints: chat completions, embeddings, images, audio, and fine-tuning.
  • The proxy automatically injects stream_options: { include_usage: true } into streaming requests, so usage is captured without any client-side changes.
  • Any OpenAI endpoint works through the proxy: just prefix the path with /v1/openai.
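The stream_options injection can be sketched as a pure rewrite of the request body. This is illustrative of the behavior, not the proxy's actual implementation:

```python
import json

def inject_usage(body_bytes: bytes) -> bytes:
    """If the request is a streaming chat completion, ensure
    stream_options.include_usage is set so the final SSE chunk
    carries a usage object. Non-streaming bodies pass through
    unchanged, and an explicit client setting is preserved."""
    body = json.loads(body_bytes)
    if body.get("stream"):
        body.setdefault("stream_options", {}).setdefault("include_usage", True)
    return json.dumps(body).encode()

req = {"model": "gpt-4o", "stream": True,
       "messages": [{"role": "user", "content": "Hello"}]}
out = json.loads(inject_usage(json.dumps(req).encode()))
print(out["stream_options"])  # -> {'include_usage': True}
```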