AI Product Operating Cost Drivers: Demystifying the Path to Strategic Value Realisation – Special Report

Organisations must adopt performance-based contracts for AI to ensure investments deliver tangible value and costs align with business objectives. This approach links payments to measurable outcomes, shifting from traditional input-based pricing.

Conclusion

By adopting a performance-based contracting framework, buyers can move beyond traditional input-based price drivers for artificial intelligence (AI) services and instead pay for outcomes that demonstrate strategic value realisation (SVR)1.

This strategic approach helps buyers ensure that AI investments and ongoing operating costs are predictable and contribute directly to the organisation’s objectives and key results (OKRs). It supports mutually beneficial partnerships with AI technology suppliers, balancing risk and reward, and requires a commitment to clear metrics, transparent measurement, and robust contract governance so that value, risk, and predictability stay in balance.

As organisations increasingly adopt AI and prepare for other disruptive technologies (e.g., ML, blockchain and, in the future, quantum computing) across front-, middle-, and back-office functions, it is fundamentally important to ensure that investments deliver tangible business value and that costs are managed effectively and sustainably.

This paper outlines a feasible framework for performance-based contracting, designed to align supplier incentives with the buyer organisation’s OKRs and to underpin scalable, equitable pricing in evergreen, flexible contract models.

Observations

To date, control mechanisms for AI, including pricing models, have fallen short in the fast-paced and dynamic landscape of disruptive technologies, where usage patterns can be unpredictable and SVR may not be directly attributable to input costs.

a) Misaligned Incentives Proliferate in AI Pricing

Traditional pricing models, such as fixed subscriptions or per-seat fees, and newly emerging throughput models wrongly touted as outcome pricing or value-based billing, often lead to overpaying for unused capacity, cross-subsidisation of other users, and/or misaligned incentives in which suppliers are (over)paid for outputs (e.g., API calls, hours worked) rather than the business outcomes the buyer actually values.

Technology service providers’ cost-volume-profit (C-V-P) algorithms, driven by factors such as facilities and staff, are complex and don’t scale linearly. This is especially true for large AI models, often resulting in mismatched pricing, value leakage, and unclear buyer value.

Emerging issues and commercial challenges point to:

  • Uncertainty in AI pricing.
  • Difficulty in proving and realising AI return on investment (ROI).
  • Significant costs associated with data quality, integration, and scaling.
  • Financial liabilities arising from AI errors or misjudgments.

For these reasons, buyers must enhance their organisation’s shared understanding of strategic opportunities from AI, review and adjust budget allocation methods, and update sourcing and supply plans.

b) Under Increasing Usage Volumes, AI Service Providers’ Costs Scale Differently from Software-as-a-Service (SaaS) Providers’

Recent open-access studies provide detailed breakdowns of AI service provider cost drivers, highlighting compute hardware, staff, infrastructure, data, and regulatory costs as significant factors2.

Table A: AI Service Provider Operating Cost Drivers

Cost Bucket | Training Cost Impact (Frontier Models3) | Service Delivery Cost Impact (Cloud/Enterprise) | Regulatory/Operational Cost Impact
Compute (GPU/TPU) | Major | Major | —
Staff/Talent | Major | Major | —
Server/Networking | Moderate | Moderate | —
Energy Consumption | Moderate | Moderate | —
Cloud Rental | Major (if used) | Major | —
Data Acquisition/Storage | Moderate | Major | —
Model Development | Major | Moderate | —
Integration/Maintenance | — | Major | —
Liability/Compliance | — | Moderate to Major | Major

As AI models and their applications grow in scale and complexity, these cost drivers become increasingly important for budgeting and strategic planning by both buyers and suppliers. This is because AI operating costs exhibit distinct behaviours under scaling conditions, with critical implications for the profitability of all market participants. The primary factors driving AI costs are:

Software Complexity: cost is driven by the AI application’s performance requirements and its inherent complexity. More sophisticated and multifaceted AI programs naturally incur higher development and operational expenses.

Data Quality and Availability: the nature of the data significantly impacts costs. Unstructured or fragmented data (e.g., text, images, video) is generally more expensive to work with than structured data, owing to the increased effort required for processing and preparation. Limited data availability and stringent privacy requirements further increase overall costs.

Intelligence Objective: the intended purpose and scope of the AI system are crucial cost determinants. Narrow AI, designed for specific tasks (e.g., recommendation systems), is considerably less resource-intensive, and generally more accurate within its domain, than broad-use generative AI, which aims to provide reasoning4 abilities and requires substantially greater investment.

Algorithm Accuracy Requirements: the desired level of precision for the AI’s algorithms directly influences expenditures. Achieving higher accuracy requires more extensive training, validation, and ongoing maintenance, resulting in increased costs.

Table B illustrates the C-V-P implications of suppliers’ AI cost buckets and their relative scale economics.

Table B: AI Systems’ C-V-P Dynamics

Cost Component | Cost Accounting | Scaling Behaviour | Measurement Basis
Compute Infrastructure | Semi-Variable | Fixed base (GPUs/TPUs) + variable cloud usage (e.g., AWS/GCP/Azure) | Committed vs. on-demand spend; cloud cost allocation tags
Model Training | Step-Fixed | Discrete jumps at model size thresholds | FLOPs/$ efficiency curves5
Energy Consumption | Variable | Scales linearly with GPU hours | Per-inference kWh tracking
Data Storage/Transfer | Variable | Linear growth with dataset size/API calls | Per-customer data egress monitoring
Talent | Step-Fixed | Remains stable until hiring thresholds (e.g., 1 engineer per 10k daily users) | Link headcount to throughput KPIs (e.g., queries/engineer)
Compliance/Security | Mixed | Fixed base (certifications) + variable (audits scaling with users) | Separate regulatory overhead from per-user compliance costs
API Serving | Variable | Directly proportional to tokens processed/user requests | Per-API endpoint cost attribution
Model Maintenance | Semi-Variable | Base monitoring + retraining spikes | Retraining frequency vs. model accuracy decay6
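
To make these C-V-P dynamics concrete, the following sketch models a supplier’s monthly operating cost as a mix of the behaviours in Table B: a semi-variable compute base, step-fixed talent, variable API serving, and mixed compliance. All figures, rates, and thresholds are hypothetical, chosen only to illustrate why unit costs don’t decline as smoothly as in typical SaaS economics.

    # Minimal sketch of AI C-V-P dynamics; all figures are hypothetical.
    import math

    def monthly_operating_cost(daily_users: int, tokens_per_user: int) -> float:
        # Semi-variable compute: fixed GPU base plus variable cloud usage
        infra = 50_000.0 + 0.002 * daily_users * tokens_per_user
        # Step-fixed talent: one engineer per 10k daily users (per Table B)
        talent = 15_000.0 * max(1, math.ceil(daily_users / 10_000))
        # Variable API serving: proportional to tokens processed per month
        serving = 0.000004 * daily_users * tokens_per_user * 30
        # Mixed compliance: fixed certifications plus per-user audit load
        compliance = 8_000.0 + 0.05 * daily_users
        return infra + talent + serving + compliance

    # Per-user cost falls unevenly, with step jumps at hiring thresholds:
    for users in (5_000, 10_001, 50_000):
        print(users, round(monthly_operating_cost(users, 2_000) / users, 2))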

c) Performance-Based Contracting Is Advantageous For Aligning AI Supplier Interests with Buyer OKRs

Performance-based pricing directly links payments for AI services to measurable business outcomes or the value those services deliver. This approach properly and fairly allocates the financial risks of the technology solution to the supplier, thus incentivising them to optimise their client solutions and pricing for buyer success.

Key Principles of Performance-Based Contracting for AI:

a) Focus on Buyer Outcomes, Not Just Supplier Outputs or Tasks:

  • Buyer Outcomes: this refers to the specific business result or value the buyer aims to achieve by using the AI solution, measured by organisational goals like increased sales, reduced costs, or improved customer satisfaction. For example, a 30 per cent reduction in customer support resolution time using an AI chatbot is a key buyer outcome.
  • Distinction Between Supplier Output and Task Completion: supplier outputs are tangible deliverables (e.g., number of chatbot conversations handled, models deployed). Task completion is the execution of specific activities (e.g., automated processing of billing and collections actions). While necessary, outputs and task completion don’t always reflect the true business value generated. Effective contracts clearly link suppliers’ activities and financial rewards to the buyer’s first-order outcomes of value.

b) Supplier Incentives Align with Buyer OKRs:

  • Collaborative Metric Definition: success metrics should be designed in collaboration with the supplier, focusing on Key Performance Indicators (KPIs) that are most critical to the buyer’s business, such as qualified leads generated, reduction in support ticket resolution time, or increase in sales conversion rates. Baselines must be established to measure improvement.
  • Align KPIs to Relevant OKRs: when converting key business KPIs into OKRs, set objectives that are aspirational yet achievable, with measurable key results that drive meaningful business impact. Table C provides a practical illustration of these linkages.

Table C: Mapping KPIs to OKRs

Key Performance Indicator: Lead Generation
Objective: Build a high-performing lead generation engine that consistently delivers quality prospects
Key Results:
  • Increase qualified leads by 40% quarter-over-quarter
  • Improve lead-to-opportunity conversion rate from 15% to 25%
  • Achieve a lead quality score of 85% or higher
  • Reduce the cost per qualified lead by 20%

Key Performance Indicator: Customer Support Excellence
Objective: Deliver exceptional customer support that builds loyalty and reduces friction
Key Results:
  • Reduce average support ticket resolution time from 24 hours to 12 hours
  • Increase first-contact resolution rate to 75%
  • Achieve a customer satisfaction score above 95% for support interactions
  • Decrease support ticket volume by 15% through proactive solutions and self-service improvements

Key Performance Indicator: Sales Performance
Objective: Optimise the sales funnel to maximise revenue growth and customer acquisition
Key Results:
  • Increase sales conversion rate by 50%
  • Improve average deal size by 25%
  • Reduce sales cycle length by 30%
  • Achieve 95% of quarterly revenue target, with 20% coming from upsell/cross-sell
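
As a minimal illustration of how key results such as those in Table C can be measured, the sketch below computes fractional attainment of a key result against an agreed baseline; the function name and figures are illustrative only.

    # Minimal sketch: tracking a key result's attainment against a baseline.
    def key_result_attainment(baseline: float, target: float, actual: float) -> float:
        """Return fractional progress from baseline towards target (0.0 to 1.0+)."""
        if target == baseline:
            raise ValueError("Target must differ from baseline")
        return max(0.0, (actual - baseline) / (target - baseline))

    # Example: improve lead-to-opportunity conversion from 15% to 25% (Table C).
    # An actual rate of 21% means the key result is 60% attained.
    print(key_result_attainment(baseline=0.15, target=0.25, actual=0.21))  # 0.6

The same formula works for reduction targets (e.g., resolution time from 24 hours to 12), since the numerator and denominator are both negative.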

c) Implement a Value-Delivered Payment Structure:

  • Structure the supplier’s fee so that a substantial portion is triggered only when the AI achieves agreed-upon outcomes (e.g., $X per qualified lead, $Y per closed sale). This helps ensure that costs scale in proportion to the buyer value realised through business impact, as sketched below.
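
A minimal sketch of such a structure, assuming hypothetical outcome names, rates, a cap, and a hybrid base fee (none of which are prescribed by this framework):

    # Minimal sketch of a value-delivered fee schedule (hypothetical rates).
    # A small base fee covers access; the bulk of supplier revenue is per-outcome.
    OUTCOME_RATES = {
        "qualified_lead": 50.0,   # $X per qualified lead
        "closed_sale": 400.0,     # $Y per closed sale
    }
    MONTHLY_BASE_FEE = 2_000.0    # hybrid model: fixed platform fee
    MONTHLY_FEE_CAP = 25_000.0    # cap preserves budget predictability

    def monthly_fee(outcomes: dict[str, int]) -> float:
        """Total monthly payment: base fee plus capped outcome-linked fees."""
        variable = sum(OUTCOME_RATES[name] * count
                       for name, count in outcomes.items())
        return MONTHLY_BASE_FEE + min(variable, MONTHLY_FEE_CAP)

    # e.g., 120 qualified leads and 15 closed sales in a month
    print(monthly_fee({"qualified_lead": 120, "closed_sale": 15}))  # 14000.0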

Next Steps – A Framework for Action

To successfully implement performance-based contracts for AI, consider the following practical steps:

a) Design Clear, Non-Overlapping Payment Structures:

Separate Payment Triggers: utilise a two-part tariff where upfront payments cover access or setup, and subsequent payments are triggered only by specific, measurable outcomes. Each payment should link to a unique, non-redundant supplier action or outcome.

Define Distinct Outcomes: ensure clear differentiation between supplier outputs and the buyer’s desired business outcomes. Payments should be for outcomes that directly advance relevant OKRs, not for intermediary steps or repeated activities; a sketch of such non-overlapping triggers follows.
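
One way to enforce non-redundant triggers, sketched here under the assumption that each verified outcome carries a unique identifier (a hypothetical convention, not a prescribed standard), is to record paid outcome IDs and pay nothing on repeat claims:

    # Minimal sketch: a two-part tariff with non-redundant payment triggers.
    class TwoPartTariff:
        def __init__(self, setup_fee: float, rate_per_outcome: float):
            self.setup_fee = setup_fee          # upfront access/setup payment
            self.rate = rate_per_outcome        # paid only on verified outcomes
            self.paid_outcome_ids: set[str] = set()

        def claim(self, outcome_id: str) -> float:
            """Pay once per unique outcome; duplicate claims earn nothing."""
            if outcome_id in self.paid_outcome_ids:
                return 0.0                      # already paid: no double payment
            self.paid_outcome_ids.add(outcome_id)
            return self.rate

    tariff = TwoPartTariff(setup_fee=10_000.0, rate_per_outcome=250.0)
    print(tariff.claim("LEAD-0001"))  # 250.0 -- first, valid claim
    print(tariff.claim("LEAD-0001"))  # 0.0   -- duplicate trigger rejected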

b) Implement Rigorous Contract Governance and Controls (with clear division of responsibilities between buyer and supplier):

Baseline and Track Performance: before contract initiation, establish baseline metrics and agreed-upon methodologies for measuring supplier contributions to the OKRs. This enables the verification of incremental progress and prevents the double-counting of improvements.

Integrate Contracts into Management Systems: embed contract terms, payment triggers, and performance metrics within the buyer’s contract management and payment systems to reduce manual errors and the risk of duplicate charges/payments.

c) Automate and Audit Payment Processes:

Automated Invoice Matching: employ accounts payable (AP) automation tools and audit trails to match invoices against contract milestones and OKR-linked deliverables. Each payment request must reference the specific outcome achieved.

Regular Audits: conduct periodic audits of payments, cross-referencing them with contract terms and supplier performance data to identify and resolve any duplicate or unjustified payments.
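
A deliberately simplified sketch of such an audit check appears below; the record layouts, milestone IDs, and amounts are all hypothetical. It cross-references payment requests against contracted milestones and flags duplicates, unknown milestones, and amount mismatches:

    # Minimal audit sketch: match payment requests to contract milestones.
    contract_milestones = {"M1": 5_000.0, "M2": 12_000.0, "M3": 8_000.0}
    payment_requests = [
        {"milestone": "M1", "amount": 5_000.0},
        {"milestone": "M1", "amount": 5_000.0},   # duplicate claim
        {"milestone": "M4", "amount": 3_000.0},   # no such milestone
        {"milestone": "M2", "amount": 15_000.0},  # exceeds contracted amount
    ]

    paid: set[str] = set()
    for req in payment_requests:
        m, amt = req["milestone"], req["amount"]
        if m not in contract_milestones:
            print(f"REJECT {m}: not a contracted milestone")
        elif m in paid:
            print(f"REJECT {m}: milestone already paid (duplicate)")
        elif amt != contract_milestones[m]:
            print(f"FLAG {m}: amount {amt} differs from contracted {contract_milestones[m]}")
        else:
            paid.add(m)
            print(f"APPROVE {m}: {amt}")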

d) Use Contribution Analysis to Attribute Value:

Quantify Supplier Contributions: apply contribution estimation frameworks (e.g., using explainable AI7) to objectively measure each supplier’s unique impact on the buyer’s metrics of interest.

Avoid Double Rewarding: if multiple suppliers contribute to the same outcome, use data-driven attribution models (e.g., SHAP8, XAI) to allocate credit proportionally, preventing duplicate payments for overlapping contributions. A worked sketch follows.
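
To make the attribution idea concrete, the sketch below computes exact Shapley values for a small set of suppliers, given an illustrative (assumed) valuation of each coalition’s joint contribution to an outcome metric. In practice, libraries such as SHAP apply the same principle to model features rather than suppliers:

    # Minimal sketch: exact Shapley-value attribution across suppliers.
    from itertools import combinations
    from math import factorial

    suppliers = ["A", "B", "C"]

    def coalition_value(coalition: frozenset) -> float:
        """Hypothetical measured uplift (e.g., extra qualified leads) per coalition."""
        values = {
            frozenset(): 0.0,
            frozenset("A"): 100.0, frozenset("B"): 60.0, frozenset("C"): 40.0,
            frozenset("AB"): 180.0, frozenset("AC"): 150.0, frozenset("BC"): 90.0,
            frozenset("ABC"): 240.0,
        }
        return values[coalition]

    def shapley(player: str) -> float:
        """Average marginal contribution of a supplier over all coalitions."""
        others = [s for s in suppliers if s != player]
        n, total = len(suppliers), 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (coalition_value(s | {player}) - coalition_value(s))
        return total

    credits = {s: round(shapley(s), 2) for s in suppliers}
    print(credits)  # shares sum to the full coalition's contribution (240.0)

Because the credits sum exactly to the full coalition’s measured contribution, no portion of the outcome is rewarded twice.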

e) Maintain Ongoing Procurement Engagement:

Stay Involved Post-Contract: implement regular reviews with suppliers to validate that payments align with contractually defined commitments and OKRs. Monitor for scope creep, upsells, or new charges not tied to incremental value.

Reiterate and Enforce Key Terms: regularly revisit contract terms to reinforce expectations and deter value leakage or duplicate billing.

f) Establish a Standard Workflow and Approval Process:

Centralise Payment Approvals: route all supplier payment requests through a centralised, standardised workflow that requires validation against contractually defined commitments and OKRs.

Require Supporting Evidence: for each payment, demand substantiating documentation or data demonstrating that the specific outcome has been achieved and that it hasn’t already been paid for.

Advantages For the Buyer:

Cost-Value Alignment: AI costs are directly linked to measurable business value when realised, reducing wasted spend.

Shared Risk: financial risk is appropriately allocated to the supplier, with payment for promised high-value outcomes contingent on proven AI results.

Continuous Improvement: the supplier is motivated to optimise and innovate for buyer success, with significant revenue tied to buyer outcomes.

Budget Predictability: true outcome-based fees, incorporating caps or hybrid models (base fee plus performance bonuses), can help manage budget variability and promote fiscal responsibility among business users (demand management).

Towards the Horizon

Considerations For an Evergreen, Flexible Contract Model:

To foster long-term partnerships and adapt to the evolving nature of AI, implement:

Regular Review and Adjustment Clauses: build in mechanisms for periodic review and adjustment of metrics and pricing to ensure continued relevance and alignment with evolving needs and capabilities.

Continuous Communications and Engagement: ongoing disclosures help ensure that all stakeholders understand AI’s SVR progress against target benefits, the associated risks, and the changes required.

Tiered Performance Incentives: consider tiered incentives for exceeding targets, encouraging suppliers to continuously over-deliver; a simple sketch follows.
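
A minimal sketch of such a schedule, with hypothetical tier thresholds and multipliers:

    # Minimal sketch: tiered performance bonus (illustrative thresholds).
    # Attainment is the fraction of target achieved, e.g., 1.10 = 110%.
    def bonus_multiplier(attainment: float) -> float:
        """Higher tiers reward over-delivery against the agreed target."""
        if attainment >= 1.20:
            return 1.50   # 120%+ of target: top bonus tier
        if attainment >= 1.10:
            return 1.25   # 110-120% of target: mid bonus tier
        if attainment >= 1.00:
            return 1.00   # target met: standard performance payment
        return 0.0        # below target: no performance bonus

    base_performance_fee = 10_000.0
    print(base_performance_fee * bonus_multiplier(1.12))  # 12500.0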

Pilot-to-Production Clauses: for new AI initiatives and proof-of-concept innovations, start with a small-scale pilot that has defined success criteria, predictable costs, and clear terms governing how pricing will change upon scaling to full production, in alignment with relevant buyer OKRs.

Footnotes

  1. © Barta Global Services, 2014–2025.
  2. References:
    a) ‘PACT: A Contract-Theoretic Framework for Pricing Agentic AI Services Powered by Large Language Models’, New York University, 2025.
    b) ‘The rising costs of training frontier AI models’, Cottier, Rahman, Fattorini, Maslej, Owen, 2024.
    c) ‘How to Forecast AI Services Costs in Cloud’, FinOps Foundation, 2025.
    d) ‘The Cost of Implementing AI in a Business: A Comprehensive Analysis’, Walturn, 2025.
    e) ‘How Much Does It Cost to Develop an AI Solution? Pricing and ROI Explained’, Coherent Solutions, 2025.
  3. Despite declining per-token training costs, overall AI training remains expensive owing to the trend towards larger, more complex models, which drives significant computational-resource costs for both training and real-world use.
  4. ‘Understanding ‘Reasoning’ in Generative AI: A Misaligned Analogy to Human Thought,’ IBRS, 2025.
  5. FLOPs (floating-point operations) measure the total computation a model consumes; FLOPs/$ efficiency curves track how much computation each dollar of spend buys. The related metric, FLOPS (floating-point operations per second), measures a system’s speed in executing floating-point calculations.
  6. Model Decay (aka Model Drift) refers to the gradual deterioration of a machine learning model’s performance post-deployment due to changes in data patterns, environments, or relationships between features and outcomes. Essentially, a once-accurate model starts making more errors because the world it was trained on has shifted.
  7. Explainable AI (XAI) refers to methods and techniques that enable humans to understand and trust the output of machine learning models. XAI is essential for promoting fairness, transparency, and trustworthiness in AI systems.
  8. SHAP (SHapley Additive exPlanations) values are a powerful tool for explainability as they provide a way to measure the contribution of each feature in a model to the final prediction, offering insights into how the model reached a prediction.
