# Anthropic

Route your Anthropic API calls through AI SpendOps for automatic usage tracking and cost attribution.
## Configuration

| Setting | Value |
|---|---|
| Route | `/v1/anthropic/*` |
| Upstream | `https://api.anthropic.com` |
| Auth header | `x-api-key: sk-ant-...` |
| Streaming usage | Native (`message_start` + `message_delta` events) |
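The route mapping in the table is a prefix rewrite: anything under `/v1/anthropic/*` is forwarded to the same path on the upstream. A minimal sketch (the helper name is hypothetical, not part of AI SpendOps):

```python
# Sketch of the proxy's prefix rewrite: requests under /v1/anthropic/*
# are forwarded to the same path on api.anthropic.com.
# to_upstream_url is a hypothetical helper for illustration only.
UPSTREAM = "https://api.anthropic.com"
ROUTE_PREFIX = "/v1/anthropic"

def to_upstream_url(proxy_path: str) -> str:
    """Map a proxied request path to its upstream Anthropic URL."""
    if not proxy_path.startswith(ROUTE_PREFIX):
        raise ValueError(f"path not under {ROUTE_PREFIX}: {proxy_path}")
    return UPSTREAM + proxy_path[len(ROUTE_PREFIX):]

print(to_upstream_url("/v1/anthropic/v1/messages"))
# https://api.anthropic.com/v1/messages
```

This is why the curl example below targets `/v1/anthropic/v1/messages`: the proxy strips its own prefix and the remainder is the normal Anthropic path.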
## SDK base URL

```
https://proxy.aispendops.com/v1/anthropic
```
## Example

```bash
curl https://proxy.aispendops.com/v1/anthropic/v1/messages \
  -H "x-api-key: sk-ant-your-anthropic-key" \
  -H "X-ASO-API-Key: aso_k_yourkey.secret" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-sonnet-4-20250514","max_tokens":1024,"messages":[{"role":"user","content":"Hello"}]}'
```
## Python SDK

```python
import anthropic

client = anthropic.Anthropic(
    api_key="sk-ant-your-anthropic-key",
    base_url="https://proxy.aispendops.com/v1/anthropic",
    default_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
print(message.content[0].text)
```
## Usage fields

| Field | Description |
|---|---|
| `input_tokens` | Input tokens (includes cache tokens) |
| `output_tokens` | Output tokens |
| `cache_read_input_tokens` | Tokens served from the prompt cache |
| `cache_creation_input_tokens` | Tokens written to the prompt cache |
| `server_tool_use.web_search_requests` | Number of web search tool invocations |
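Because `input_tokens` already includes cache tokens, a downstream cost calculation has to split them back out before applying per-token rates. A sketch of that calculation (the rates below are illustrative placeholders, not real Anthropic or AI SpendOps prices):

```python
# Sketch: split input_tokens into regular vs. cache tokens before pricing,
# since input_tokens already includes cache tokens.
# All rates are hypothetical placeholders (USD per million tokens).
RATES_PER_MTOK = {
    "input": 3.00,
    "cache_read": 0.30,
    "cache_write": 3.75,
    "output": 15.00,
}

def estimate_cost(usage: dict) -> float:
    cache_read = usage.get("cache_read_input_tokens", 0)
    cache_write = usage.get("cache_creation_input_tokens", 0)
    # Subtract cache tokens so they are not double-counted at the input rate.
    regular_input = usage["input_tokens"] - cache_read - cache_write
    return (
        regular_input * RATES_PER_MTOK["input"]
        + cache_read * RATES_PER_MTOK["cache_read"]
        + cache_write * RATES_PER_MTOK["cache_write"]
        + usage["output_tokens"] * RATES_PER_MTOK["output"]
    ) / 1_000_000

usage = {
    "input_tokens": 1_200,            # includes the 1,000 cache-read tokens
    "cache_read_input_tokens": 1_000,
    "cache_creation_input_tokens": 0,
    "output_tokens": 300,
}
print(round(estimate_cost(usage), 6))  # 0.0054
```

With the placeholder rates, only the 200 non-cached input tokens are billed at the full input rate; the 1,000 cache-read tokens are priced separately at the cheaper cache rate.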
## Notes

- Use the native `/v1/messages` endpoint for accurate streaming usage. The OpenAI-compatible endpoint does not return streaming usage data.
- Cache tokens are included in the `input_tokens` total. The usage consumer subtracts cache tokens to avoid double-counting when calculating costs.
- Anthropic uses `x-api-key` instead of `Authorization: Bearer` for authentication.
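The native streaming usage noted above arrives in two places: `message_start` carries the input-side counts, and `message_delta` carries the cumulative output token count. A sketch of folding decoded SSE event payloads into one usage record (event shapes follow Anthropic's documented streaming format; the payload values are illustrative):

```python
# Sketch: fold Anthropic SSE events into a single usage record.
# message_start carries input-side usage; message_delta carries the
# cumulative (not incremental) output token count.
def accumulate_usage(events: list[dict]) -> dict:
    usage = {"input_tokens": 0, "output_tokens": 0}
    for event in events:
        if event["type"] == "message_start":
            start = event["message"]["usage"]
            usage["input_tokens"] = start.get("input_tokens", 0)
            usage["cache_read_input_tokens"] = start.get("cache_read_input_tokens", 0)
            usage["cache_creation_input_tokens"] = start.get("cache_creation_input_tokens", 0)
        elif event["type"] == "message_delta":
            # Cumulative total: the last message_delta wins.
            usage["output_tokens"] = event["usage"]["output_tokens"]
    return usage

# Illustrative event payloads for a short streamed reply.
events = [
    {"type": "message_start",
     "message": {"usage": {"input_tokens": 25, "cache_read_input_tokens": 0,
                           "cache_creation_input_tokens": 0, "output_tokens": 1}}},
    {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hi"}},
    {"type": "message_delta", "delta": {"stop_reason": "end_turn"},
     "usage": {"output_tokens": 12}},
]
print(accumulate_usage(events))
```

This is the kind of aggregation the proxy can only perform on the native endpoint, since the OpenAI-compatible streaming response omits these usage events.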