# Google AI Studio
Route your Google AI Studio / Gemini API calls through AI SpendOps for automatic usage tracking and cost attribution.
## Configuration

| Setting | Value |
|---|---|
| Route | `/v1/google/*` |
| Upstream | `https://generativelanguage.googleapis.com` |
| Auth header | `Authorization: Bearer ...` or `x-goog-api-key: ...` |
| Streaming usage | Auto-injected (`stream_options.include_usage`) |
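The route-to-upstream mapping in the table above can be sketched as follows. This is an illustrative sketch only; the function name and prefix-stripping logic are assumptions, not the actual AI SpendOps implementation.

```python
# Illustrative sketch: how a proxied path maps onto the upstream URL.
# UPSTREAM and ROUTE_PREFIX come from the configuration table above.
UPSTREAM = "https://generativelanguage.googleapis.com"
ROUTE_PREFIX = "/v1/google"

def to_upstream(proxy_path: str) -> str:
    """Strip the proxy's route prefix and prepend the upstream origin."""
    if not proxy_path.startswith(ROUTE_PREFIX):
        raise ValueError(f"path does not match route {ROUTE_PREFIX}/*")
    return UPSTREAM + proxy_path[len(ROUTE_PREFIX):]

print(to_upstream("/v1/google/v1beta/openai/chat/completions"))
# https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
```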
## SDK base URL

```
https://proxy.aispendops.com/v1/google
```
## Example

```shell
curl https://proxy.aispendops.com/v1/google/v1beta/openai/chat/completions \
  -H "Authorization: Bearer your-google-api-key" \
  -H "X-ASO-API-Key: aso_k_yourkey.secret" \
  -H "Content-Type: application/json" \
  -d '{"model":"gemini-2.0-flash","messages":[{"role":"user","content":"Hello"}]}'
```
## Python SDK

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-google-api-key",
    base_url="https://proxy.aispendops.com/v1/google/v1beta/openai",
    default_headers={"X-ASO-API-Key": "aso_k_yourkey.secret"},
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
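Since the proxy auto-injects `stream_options.include_usage`, a streamed response via the OpenAI-compatible endpoint ends with a chunk that carries a `usage` object and an empty `choices` list. A small helper to accumulate the text and capture that trailing usage might look like the sketch below; the helper name is illustrative and not part of any AI SpendOps SDK.

```python
# Illustrative helper: accumulate streamed text and capture the final
# usage chunk that include_usage adds to OpenAI-style streams.
def collect_stream(chunks):
    """Return (full_text, usage) from an OpenAI-style chunk stream."""
    parts, usage = [], None
    for chunk in chunks:
        # Content chunks carry a delta; the final usage chunk has no choices.
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
        if getattr(chunk, "usage", None) is not None:
            usage = chunk.usage
    return "".join(parts), usage
```

Used with the client above, it would look like:

```python
stream = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
text, usage = collect_stream(stream)
print(text, usage.total_tokens)
```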
## Usage fields

| Field | Description |
|---|---|
| `promptTokenCount` | Input tokens |
| `candidatesTokenCount` | Output tokens |
| `totalTokenCount` | Total tokens |
| `cachedContentTokenCount` | Tokens served from cached content |
| `thoughtsTokenCount` | Tokens used for thinking (reported via `usageMetadata`) |
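Reading these fields from a native-format response can be sketched as follows. The response dict here is a hand-written example with made-up token counts, not a real API payload.

```python
# Hand-written example of Google's native response shape; the field
# names match the usage-fields table above, the numbers are made up.
response = {
    "usageMetadata": {
        "promptTokenCount": 8,
        "candidatesTokenCount": 24,
        "totalTokenCount": 32,
        "cachedContentTokenCount": 0,
    }
}

meta = response["usageMetadata"]
input_tokens = meta["promptTokenCount"]
output_tokens = meta["candidatesTokenCount"]
# thoughtsTokenCount only appears for thinking models, so default to 0.
thinking_tokens = meta.get("thoughtsTokenCount", 0)

print(input_tokens, output_tokens, thinking_tokens)
# 8 24 0
```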
## Notes

- Google AI Studio supports both `Authorization: Bearer` and `x-goog-api-key` for authentication.
- The proxy auto-injects `stream_options: { include_usage: true }` for streaming requests via the OpenAI-compatible endpoint.
- Usage metadata is returned in a `usageMetadata` object in Google's native response format.