# LiteLLM Integration

LiteLLM is a popular library for calling 100+ LLM APIs through a unified interface. It works with AI SpendOps by pointing `api_base` at the proxy and passing the ASO headers via `extra_headers`.
```python
import litellm

response = litellm.completion(
    model="openai/gpt-4.1",
    messages=[{"role": "user", "content": "Hello"}],
    api_key="sk-your-openai-key",
    api_base="https://proxy.aispendops.com/v1/openai/v1",
    extra_headers={
        "X-ASO-API-Key": "aso_k_yourkey.secret",
        "X-ASO-Dims": "team=ml,app=chatbot",
    },
)

print(response.choices[0].message.content)
```
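The `X-ASO-Dims` header is a comma-separated list of `key=value` pairs. If you tag requests with several dimensions, a small helper (hypothetical, not part of LiteLLM or AI SpendOps) keeps the formatting consistent:

```python
def aso_dims(**dims):
    """Format keyword arguments as an X-ASO-Dims header value."""
    return ",".join(f"{key}={value}" for key, value in dims.items())

# Equivalent to the hand-written headers above.
extra_headers = {
    "X-ASO-API-Key": "aso_k_yourkey.secret",
    "X-ASO-Dims": aso_dims(team="ml", app="chatbot"),
}
```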
## Multiple providers

LiteLLM picks the provider from the model prefix, so each call only needs the matching proxy `api_base`:
```python
# OpenAI through proxy
response = litellm.completion(
    model="openai/gpt-4.1",
    messages=messages,
    api_key="sk-openai-key",
    api_base="https://proxy.aispendops.com/v1/openai/v1",
    extra_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)

# Anthropic through proxy
response = litellm.completion(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=messages,
    api_key="sk-ant-key",
    api_base="https://proxy.aispendops.com/v1/anthropic",
    extra_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)
```
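Rather than repeating the proxy URL in every call, you can derive it from the model string. A minimal sketch: `PROXY_BASE` and the path suffixes simply mirror the two examples above, so adjust them to your deployment.

```python
PROXY_BASE = "https://proxy.aispendops.com/v1"

# Path suffixes taken from the examples above; extend as you add providers.
PROVIDER_PATHS = {
    "openai": "/openai/v1",
    "anthropic": "/anthropic",
}

def proxy_api_base(model: str) -> str:
    """Derive the proxy api_base from a LiteLLM model string like 'openai/gpt-4.1'."""
    provider = model.split("/", 1)[0]
    return PROXY_BASE + PROVIDER_PATHS[provider]
```

Then pass `api_base=proxy_api_base(model)` to `litellm.completion`.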
## Environment variables

You can also configure LiteLLM via environment variables:
```shell
export OPENAI_API_BASE="https://proxy.aispendops.com/v1/openai/v1"
export OPENAI_API_KEY="sk-your-openai-key"
```
Then pass the ASO headers per request:
```python
response = litellm.completion(
    model="openai/gpt-4.1",
    messages=messages,
    extra_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)
```
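The ASO key can be treated the same way and read from the environment instead of hard-coded. A minimal sketch, assuming you export it as `ASO_API_KEY` (an illustrative naming convention, not something LiteLLM or AI SpendOps mandates):

```python
import os

def aso_headers(env=os.environ):
    """Build the per-request ASO headers from environment variables.

    ASO_API_KEY is an illustrative variable name; LiteLLM only sees
    whatever dict you pass as extra_headers.
    """
    return {"X-ASO-API-Key": env["ASO_API_KEY"]}

# Usage: litellm.completion(model=..., messages=..., extra_headers=aso_headers())
```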