How to Define Dimensions for AI Cost Attribution: A Guide for CFOs
Most CFOs already know their AI spend is growing. The harder questions are: where is it growing, and who is responsible for it?
The answer lives in dimensions — the categories you use to slice, attribute, and report on every pound of AI API spend. Get your dimensions right and you unlock forecasting, chargeback, and accountability. Get them wrong and you're stuck with a single line item on the cloud bill that tells you almost nothing.
This guide walks through the dimensions that matter most, how to think about choosing them, and the mistakes to avoid.
What Is a Dimension?
A dimension is simply a label you attach to AI spend so you can group and filter costs in ways that are meaningful to your business. Think of dimensions as the columns in a pivot table. Each AI API call can be tagged with multiple dimensions, giving you the ability to answer questions like "how much did the product team spend on GPT-4 in staging last month?"
The power of dimensions is that finance defines them, not engineering. You choose the taxonomy that matches how your organisation already budgets, reports, and allocates cost.
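To make the pivot-table analogy concrete, here is a minimal sketch of what a single tagged API call record might look like. The field names and dimension values are illustrative assumptions, not a real schema:

```python
# Hypothetical record of one AI API call, tagged with the
# dimensions discussed in this guide (illustrative names only).
call_record = {
    "model": "gpt-4",
    "cost_gbp": 0.042,
    "dimensions": {
        "team": "product",
        "feature": "search-summarisation",
        "environment": "staging",
        "cost_centre": "CC-1200",
        "use_case": "customer-facing",
        "region": "uk",
    },
}

# Any dimension then behaves like a pivot-table column:
# filter on it, then sum cost_gbp over the matching calls.
is_staging = call_record["dimensions"]["environment"] == "staging"
staging_spend = call_record["cost_gbp"] if is_staging else 0.0
```

Because finance defines the taxonomy, the keys under "dimensions" mirror your reporting structure rather than engineering's internal naming.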
The Core Dimensions Every CFO Should Consider
1. Team or Department
This is the most fundamental cut. Which team triggered the spend? Engineering, product, data science, marketing, customer support — each may be consuming AI APIs for entirely different reasons. Without this dimension, you cannot produce a chargeback report or hold budget owners accountable.
Ask yourself: If spend doubled overnight, would I know which team to call?
2. Product or Feature
AI spend rarely maps neatly to a single product. A single application might use language models for search, summarisation, and content generation — each with very different cost profiles. Tagging by feature lets you understand unit economics at a granular level. It also surfaces which features are worth the investment and which are burning budget without proportionate value.
3. Environment
Development, staging, and production environments can have wildly different spend patterns. It's not uncommon for dev and test environments to account for 30–40% of total AI API costs, often unintentionally. Splitting by environment immediately highlights waste and gives engineering a clear target for optimisation without touching production workloads.
4. Cost Centre or Business Unit
This is where AI spend connects to your existing financial reporting structure. Mapping API costs to the cost centres you already use means AI spend flows naturally into management accounts, P&L reporting, and board packs without manual reconciliation. If your finance team has to re-categorise AI costs by hand each month, this dimension is missing.
5. Use Case or Purpose
Why is the AI being called? Internal tooling, customer-facing features, research and experimentation, compliance automation — each carries different expectations around ROI and acceptable spend levels. This dimension helps you answer the board question: "What are we spending on AI and is it worth it?"
6. Region or Geography
For multinational organisations, regional attribution matters for transfer pricing, regulatory compliance, and understanding local adoption patterns. It can also reveal latency-driven cost differences if teams in certain regions are defaulting to more expensive model configurations.
How to Choose the Right Dimensions for Your Organisation
Not every dimension above will be relevant to every business, and you may need others that are unique to your sector or operating model. Here are three principles to guide the decision.
Start with how you already report. Your AI spend dimensions should mirror the structure of your existing management accounts. If you report by business unit and product line today, those should be your first two dimensions. The goal is to make AI costs slot into the reporting cadence you already have, not to create a parallel system.
Prioritise accountability over granularity. It's tempting to tag everything, but each dimension adds complexity. Start with three or four that let you answer the most important questions — who is spending, on what, and is it within budget? You can always add dimensions later once the basics are clean.
Make sure dimensions are enforceable. A dimension only works if every API call gets tagged. That means the taxonomy needs to be clear, finite, and ideally enforced at the point of call rather than applied retroactively. If engineers have to remember to tag things manually, your data will have gaps within a week.
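"Enforced at the point of call" can be as simple as a validation step that rejects any request missing a required tag. This is a sketch under assumed names, not a specific platform's API:

```python
# Hypothetical validation enforced before an AI API call is made,
# rather than tags being applied retroactively.
REQUIRED_DIMENSIONS = {"team", "feature", "environment", "cost_centre"}

def validate_tags(dimensions: dict) -> dict:
    """Reject the call up front if any required dimension is missing."""
    missing = REQUIRED_DIMENSIONS - dimensions.keys()
    if missing:
        raise ValueError(f"AI call rejected: missing dimensions {sorted(missing)}")
    return dimensions

# A fully tagged call passes validation...
validate_tags({"team": "data-science", "feature": "doc-qa",
               "environment": "production", "cost_centre": "CC-3100"})

# ...while an untagged one fails loudly instead of quietly
# creating a gap in the attribution data.
```

The point is that the taxonomy is finite and checked automatically; no engineer has to remember to tag anything by hand.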
Common Mistakes to Avoid
Letting engineering define the taxonomy in isolation. Engineers will naturally organise by technical boundaries like microservice name, repository, or deployment cluster. These are useful for debugging but meaningless in a board pack. Finance needs to own or co-own the dimension design from day one.
Treating AI spend as a single cloud line item. Bundling AI costs into "compute" or "third-party software" hides the fastest-growing discretionary spend category in most technology budgets. AI spend deserves its own reporting structure with its own dimensions.
Waiting until spend is a problem. The best time to define dimensions is before AI costs become material. Retrofitting attribution onto an estate of hundreds of thousands of daily API calls is significantly harder than setting it up when volumes are still manageable.
What Good Looks Like
When dimensions are defined well, a CFO should be able to open a single dashboard and answer questions like:
- How much did we spend on AI last month, broken down by team and product?
- Which features have the highest AI cost per user or per transaction?
- Are dev and staging environments consuming budget that should be capped?
- How does our spend forecast compare to the budget we set at the start of the quarter?
- Can I produce a chargeback report for each business unit without asking engineering for help?
If you can answer those questions confidently, your dimensions are working. If you can't, it's time to revisit the taxonomy.
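Every question above reduces to the same operation: group tagged call records by a dimension and sum the cost. A minimal sketch, with made-up figures purely for illustration:

```python
from collections import defaultdict

# Illustrative tagged call records (figures are invented).
calls = [
    {"cost_gbp": 12.50, "team": "product", "environment": "production"},
    {"cost_gbp": 7.25,  "team": "product", "environment": "staging"},
    {"cost_gbp": 3.10,  "team": "support", "environment": "production"},
]

# Chargeback by team: group by one dimension, sum the cost.
spend_by_team = defaultdict(float)
for call in calls:
    spend_by_team[call["team"]] += call["cost_gbp"]
# spend_by_team -> {"product": 19.75, "support": 3.10}

# The same pattern answers the environment question.
staging_spend = sum(c["cost_gbp"] for c in calls
                    if c["environment"] == "staging")
```

Swap the grouping key and the same three lines produce the per-feature, per-cost-centre, or per-region view, which is why the dashboard questions become trivial once tagging is in place.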
Getting Started
Defining dimensions doesn't require a six-month project. The practical steps are straightforward:
- Audit your current AI providers and how calls are made
- Agree on three to five dimensions with finance and engineering jointly
- Route your API traffic through a platform that enforces tagging and produces reporting automatically
The goal isn't perfection on day one. It's moving from a single opaque line item to a structured, attributable, finance-ready view of AI spend — so that when the board asks the question, you already have the answer.
The number of dimensions available depends on your subscription plan. See Pricing for details. For technical implementation, see Dimensions.