# OpenAI
Route your OpenAI API calls through AI SpendOps for automatic usage tracking and cost attribution.
## Configuration

| Setting | Value |
|---|---|
| Route | `/v1/openai/*` |
| Upstream | `https://api.openai.com` |
| Auth header | `Authorization: Bearer sk-...` |
| Streaming usage | Auto-injected (`stream_options.include_usage`) |
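The route-to-upstream mapping can be pictured as a prefix rewrite. A minimal sketch, assuming the rule implied by the Route and Upstream settings above (the helper name is hypothetical):

```python
# Illustrative: map a proxy request path to the upstream URL by
# stripping the route prefix, per the Route/Upstream settings.
ROUTE_PREFIX = "/v1/openai"
UPSTREAM = "https://api.openai.com"

def upstream_url(proxy_path: str) -> str:
    """Strip the proxy route prefix and prepend the upstream origin."""
    assert proxy_path.startswith(ROUTE_PREFIX), "path not under the OpenAI route"
    return UPSTREAM + proxy_path[len(ROUTE_PREFIX):]

print(upstream_url("/v1/openai/v1/chat/completions"))
# https://api.openai.com/v1/chat/completions
```

This is also why the SDK base URL below ends in `/v1/openai/v1`: the trailing `/v1` is the upstream OpenAI path segment, while `/v1/openai` is the proxy's route prefix.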
## SDK base URL

```
https://proxy.aispendops.com/v1/openai/v1
```
## Example

```shell
curl https://proxy.aispendops.com/v1/openai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "X-ASO-API-Key: aso_k_yourkey.secret" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'
```
## Python SDK

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.aispendops.com/v1/openai/v1",
    default_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
## Usage fields

| Field | Description |
|---|---|
| `prompt_tokens` | Input tokens |
| `completion_tokens` | Output tokens |
| `total_tokens` | Sum of prompt and completion tokens |
| `prompt_tokens_details.cached_tokens` | Prompt tokens served from cache |
| `prompt_tokens_details.audio_tokens` | Audio input tokens |
| `prompt_tokens_details.image_tokens` | Image input tokens |
| `completion_tokens_details.reasoning_tokens` | Tokens used for reasoning (o-series models) |
| `completion_tokens_details.audio_tokens` | Audio output tokens |
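The invariants among these fields can be sanity-checked when reading a response. A minimal sketch using an illustrative usage payload (the field values here are made up for demonstration):

```python
# Illustrative usage object, shaped like the fields in the table above.
usage = {
    "prompt_tokens": 120,
    "completion_tokens": 45,
    "total_tokens": 165,
    "prompt_tokens_details": {"cached_tokens": 100, "audio_tokens": 0, "image_tokens": 0},
    "completion_tokens_details": {"reasoning_tokens": 0, "audio_tokens": 0},
}

# total_tokens is the sum of prompt and completion tokens.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]

# cached_tokens is a subset of prompt_tokens, so the uncached portion is:
uncached = usage["prompt_tokens"] - usage["prompt_tokens_details"]["cached_tokens"]
print(uncached)  # 20
```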
## Notes

- The most popular provider. Supports chat completions, embeddings, images, audio, and fine-tuning.
- The proxy automatically injects `stream_options: { include_usage: true }` into streaming requests, so usage is captured without any client-side changes.
- Any OpenAI endpoint works through the proxy: just prefix the path with `/v1/openai`.
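The streaming injection above is equivalent to setting `stream_options` yourself before the request leaves the client. A minimal stdlib sketch of that transformation (the helper is hypothetical; the real proxy applies this on the wire):

```python
import json

def inject_usage_option(body: bytes) -> bytes:
    """Mimic the proxy: ensure streaming requests ask for usage
    in the final chunk via stream_options.include_usage."""
    payload = json.loads(body)
    if payload.get("stream"):
        payload.setdefault("stream_options", {})["include_usage"] = True
    return json.dumps(payload).encode()

req = {"model": "gpt-4o", "stream": True,
       "messages": [{"role": "user", "content": "Hello"}]}
out = json.loads(inject_usage_option(json.dumps(req).encode()))
print(out["stream_options"])  # {'include_usage': True}
```

Non-streaming requests pass through unchanged; the option only affects responses delivered as a stream, where OpenAI appends a final chunk carrying the usage object.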