VENDORiQ: We called it! Cash-Strapped AI Vendors Add Ads. Commercial Bias in 3…2…1

OpenAI’s pivot to advertising signals inevitable commercial bias, compromising output integrity and privacy. Executives must audit free-tier usage and prioritise vendor independence.

The Latest

OpenAI has started testing ads in ChatGPT, showing sponsored content in clearly marked grey boxes below AI responses. The company says ads will not affect ChatGPT’s answers, user chats stay private, and users can turn off ad personalisation and delete ad data. Ads are aimed at logged-in adults using the free and ‘Go’ subscription levels. Early advertisers include Adobe, Audible, HelloFresh, Ford, and Mazda. OpenAI CEO Sam Altman once called advertising a ‘last resort’ for making money, but now says it is needed to make AI services available to people who cannot pay for subscriptions.

Why it Matters:

Australian and New Zealand organisations should take a closer look at ads being added to ChatGPT and other AI tools. While OpenAI presents this as making AI more accessible, the way ad-supported models work often leads to outcomes that do not match these ideals.

Right now, OpenAI keeps ads separate from responses and clearly labels them as sponsored. But history shows a pattern with ad-supported platforms. Google’s search results are the clearest example. From 2000 to 2010, sponsored links sat in yellow boxes with clear labels. By 2025, ads had become almost indistinguishable from organic results. This happened because users tend to ignore obvious ads but click on content that looks like answers.

As ad revenue became integral to Google’s targets, optimising for click-through rates steadily eroded the boundary between paid and organic content. OpenAI’s current approach looks much like Google’s starting point.

IBRS expects other AI companies to follow OpenAI’s example. In late 2025, IBRS asked Microsoft if ads might appear in Windows Copilot’s recommendation services. Microsoft said ads were not used in the free version of Copilot and there were no immediate plans, but they are considering all options for the future.

Significant pressures may challenge OpenAI’s and other AI vendors’ promises of transparency.

  • First, advertising models create a basic conflict of interest. As IBRS research on privacy policy templates in the AI era points out, organisations often share sensitive data with AI systems, such as medical worries, relationship issues, or financial concerns. This information can be used for targeting. Even if ads do not directly affect AI responses, the push to boost engagement and ad views can quietly change how products are developed. This is similar to what happened at Facebook, where privacy promises weakened over time because the ad model rewarded engagement above everything else. Anthropic, by contrast, has stated that it does not plan to fund its products through advertising. Whilst Anthropic has left itself an ‘out’ by stating it may reconsider this position if circumstances change, the company has correctly identified that advertising economics create directional pressure toward lower-quality utility. If and when OpenAI becomes a publicly traded firm, investor pressure to grow advertising revenue year on year will eventually conflict with maintaining user trust and response quality.
  • Second, it is important to look at the real costs of using AI. IBRS’s analysis ‘The AI Cost Iceberg’ (2026) shows that the visible price of AI services hides many hidden costs, especially the ‘verification tax’: the extra work needed to check and fix AI mistakes. Ad-supported models that optimise for engagement rather than accuracy will likely make this burden heavier for organisations using ChatGPT for important business tasks. IBRS’s AI productivity paradox analysis also covers this issue, which matters increasingly going forward as pressure to boost engagement may lower the quality of AI outputs.
  • Third, privacy is a key concern for Australian organisations. Recent changes to the Australian Privacy Act set strict rules for anyone collecting and handling personal information. People using ChatGPT might share sensitive details. Even if OpenAI keeps its current protections, the rules for using data in advertising, especially about consent and transparency, are still not fully settled.
  • Fourth, AI models also use ‘reinforcement learning from human feedback’ (RLHF). If ads start to make responses less useful or satisfying, the model’s performance scores will fall. This gives companies a reason to keep ads separate from the main reasoning part of the AI, so they do not lose users to competitors like Anthropic that do not use ads.

Who’s Impacted?

  • Chief Information Officers and Chief Technology Officers: Are responsible for choosing vendors and setting up data governance. They need to decide if the new AI advertising model fits their organisation’s risk tolerance and whether internal policies should be updated to limit free-tier use for sensitive tasks.
  • Chief Data Officers and Privacy Officers: Need to consider what it means for user data to be handled in an advertising system. Recent changes to the Australian Privacy Act add new rules about consent and transparency, which ad-supported models may make more complicated.
  • Procurement and Vendor Management Teams: Must scrutinise the terms of service for AI services, particularly regarding data retention, usage rights, and the scope of advertising personalisation. Contract negotiations should address contingencies if a previously selected AI platform expands its revenue to include advertising.
  • Development and AI Teams: Need to be aware that AI responses could soon be shaped by incentives to boost engagement, which might affect reliability for important uses. Teams should set up processes to check critical outputs, especially when using ad-supported models.
  • Heads of Legal and Compliance: Should review the legal and regulatory risks of staff using ad-supported AI platforms, especially in terms of data protection, intellectual property, and meeting Australian privacy law requirements.

Next Steps:

  • Audit how ChatGPT is currently used: Make a list of all internal uses, especially where the free or Go tiers are used for sensitive company or client data. Mark high-risk cases for immediate policy review.
  • Review and update acceptable use policies: Make sure they match IBRS’s Safe AI Usage Policy Template and clearly limit free-tier access to non-sensitive tasks. Spell out data handling duties and consent rules under the Australian Privacy Act.
  • Set up verification protocols: As AI advertising grows, it will become even more important to check all AI-generated outputs used in important decisions.
  • Keep track of changes at OpenAI and other AI vendors: Set up quarterly reviews to check for updates in how ads are used, terms of service, and privacy controls. If ads become more common or harder to spot, review your internal usage policies more closely.
  • Diversify AI vendor strategy: As needed, reduce reliance on ad-supported models by exploring alternative large language models with subscription or consumption-based pricing. This aligns with IBRS guidance on model independence and vendor resilience.
  • Do a compliance review: Work with legal, HR, and privacy teams to check if staff use of free-tier ChatGPT or other AI tools creates a compliance risk under the Australian Privacy Act or industry rules. Record your findings and share any limits with users.
  • Set up an ‘AI Commercial Bias Verification Framework’: RLHF can help by training AI models to focus on being helpful and accurate, and by discouraging misleading or unhelpful outputs. But as more vendors add ad-supported options, there is a risk that commercial goals will push models to favour engagement or sponsored content over real usefulness. A formal framework should include regular audits of high-risk cases, routine reviews of vendor terms to spot hidden ad integration, and require human checks to catch commercial bias that might slip past normal safeguards.
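To make the verification framework concrete, the audit and human-review steps above can be sketched as a simple triage routine. This is a minimal illustration, not IBRS methodology: the record fields (`tier`, `sensitive_data`, `response`), the sponsored-content markers, and the flag names are all hypothetical placeholders an organisation would replace with its own logging schema and risk criteria.

```python
# Illustrative sketch of a commercial-bias triage pass over an audit log of
# AI interactions. All field names and markers below are assumptions, not a
# real vendor API.

SPONSORED_MARKERS = ("sponsored", "promoted", "ad:")  # placeholder markers


def flag_for_review(interaction: dict) -> list[str]:
    """Return audit flags for a single AI interaction record."""
    flags = []
    # Free/Go-tier use with sensitive data is the highest-risk combination
    # identified in the usage audit step.
    if interaction.get("tier") in ("free", "go") and interaction.get("sensitive_data"):
        flags.append("free-tier-sensitive-data")
    # Responses containing sponsored-content markers are routed to human review.
    text = interaction.get("response", "").lower()
    if any(marker in text for marker in SPONSORED_MARKERS):
        flags.append("possible-sponsored-content")
    return flags


def audit(interactions: list[dict]) -> dict[str, list[str]]:
    """Map each interaction id to its flags, keeping only flagged items."""
    results = {}
    for item in interactions:
        flags = flag_for_review(item)
        if flags:
            results[item["id"]] = flags
    return results
```

In practice the flagged records would feed the quarterly vendor review, with humans deciding whether a flag reflects genuine commercial bias or a false positive.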
