
Anthropic

Route your Anthropic API calls through AI SpendOps for automatic usage tracking and cost attribution.

Configuration

| Setting | Value |
| --- | --- |
| Route | `/v1/anthropic/*` |
| Upstream | `https://api.anthropic.com` |
| Auth header | `x-api-key: sk-ant-...` |
| Streaming usage | Native (`message_start` + `message_delta` events) |
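In a streaming response, input-token counts arrive on the `message_start` event and the running output-token count arrives on `message_delta` events. A minimal sketch of folding a parsed event stream into one usage summary (the event dicts below are illustrative stand-ins for the SSE payloads, not captured proxy output):

```python
def accumulate_usage(events):
    """Fold Anthropic streaming events into a single usage summary.

    Illustrative sketch: input_tokens is reported once on message_start;
    message_delta events carry the cumulative output_tokens count, so the
    last one seen wins.
    """
    usage = {"input_tokens": 0, "output_tokens": 0}
    for event in events:
        if event["type"] == "message_start":
            # message_start carries the input token count up front
            usage["input_tokens"] = event["message"]["usage"]["input_tokens"]
        elif event["type"] == "message_delta":
            # message_delta reports the cumulative output token count
            usage["output_tokens"] = event["usage"]["output_tokens"]
    return usage

# Example event stream (shapes modeled on Anthropic's SSE events)
events = [
    {"type": "message_start",
     "message": {"usage": {"input_tokens": 12, "output_tokens": 1}}},
    {"type": "content_block_delta", "delta": {"text": "Hello"}},
    {"type": "message_delta", "usage": {"output_tokens": 5}},
    {"type": "message_delta", "usage": {"output_tokens": 9}},
]
print(accumulate_usage(events))  # {'input_tokens': 12, 'output_tokens': 9}
```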

SDK base URL

https://proxy.aispendops.com/v1/anthropic

Example

curl https://proxy.aispendops.com/v1/anthropic/v1/messages \
  -H "x-api-key: sk-ant-your-anthropic-key" \
  -H "X-ASO-API-Key: aso_k_yourkey.secret" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-sonnet-4-20250514","max_tokens":1024,"messages":[{"role":"user","content":"Hello"}]}'

Python SDK

import anthropic

client = anthropic.Anthropic(
    api_key="sk-ant-your-anthropic-key",
    base_url="https://proxy.aispendops.com/v1/anthropic",
    default_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
print(message.content[0].text)

Usage fields

| Field | Description |
| --- | --- |
| `input_tokens` | Input tokens (includes cache tokens) |
| `output_tokens` | Output tokens |
| `cache_read_input_tokens` | Tokens served from the prompt cache |
| `cache_creation_input_tokens` | Tokens written to the prompt cache |
| `server_tool_use.web_search_requests` | Number of web search tool invocations |

Notes

  • Use the native `/v1/messages` endpoint for accurate streaming usage. The OpenAI-compatible endpoint does not return streaming usage data.
  • Cache tokens are included in the `input_tokens` total. The usage consumer subtracts cache tokens to avoid double-counting when calculating costs.
  • Anthropic uses `x-api-key` instead of `Authorization: Bearer` for authentication.
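The cache-token subtraction described in the notes can be sketched as follows. The per-million-token prices here are placeholders for illustration, not real Anthropic or AI SpendOps rates, and the field names match the usage table:

```python
def cost_usd(usage, price_in=3.0, price_out=15.0,
             price_cache_read=0.3, price_cache_write=3.75):
    """Estimate request cost in USD.

    Prices are USD per million tokens and are placeholder examples only.
    Per the note above, input_tokens includes cache tokens, so cache
    reads/writes are subtracted before applying the base input rate to
    avoid double-counting.
    """
    cache_read = usage.get("cache_read_input_tokens", 0)
    cache_write = usage.get("cache_creation_input_tokens", 0)
    uncached_in = usage["input_tokens"] - cache_read - cache_write
    total = (uncached_in * price_in
             + usage["output_tokens"] * price_out
             + cache_read * price_cache_read
             + cache_write * price_cache_write)
    return total / 1_000_000

usage = {"input_tokens": 1000, "output_tokens": 200,
         "cache_read_input_tokens": 400, "cache_creation_input_tokens": 100}
print(round(cost_usd(usage), 6))  # 0.004995
```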