VENDORiQ: Will the Rush to AI Fragment the HCM Landscape?

Acquisitions like Workday-Paradox risk fragmenting HCM with new data silos, demanding unified AI governance and local compliance scrutiny.

The Latest

Workday has completed its acquisition of Paradox, an AI company specialising in conversational AI for candidate experience, particularly in frontline industries. This integration aims to enhance Workday’s talent acquisition suite by combining Paradox’s capabilities with Workday Recruiting and HiredScore. The objective is to streamline the hiring process, improve recruiter efficiency, and deliver a better candidate experience.

Why it Matters

The increasing integration of AI services into human capital management (HCM) solutions, as exemplified by Workday’s acquisition of Paradox, presents both opportunities and challenges for organisations. While AI-driven solutions can enhance efficiency across workforce planning, recruitment, training, and management, there is a risk that such technologies may inadvertently fragment human resource solutions.

This fragmentation, reminiscent of issues faced around 2010, could lead to the creation of new data silos and inconsistencies in data governance. 

A key concern for HR and ICT executives is ensuring a unified approach to AI implementation. Without close collaboration, different AI services (especially agentic ones) within HCM solutions may operate independently, hindering a holistic view of human capital data and potentially complicating compliance.

Workday’s new Agent System of Record, designed to manage AI agents from various sources, aims to address this by providing a centralised system for orchestrating AI functionalities. This type of capability could mitigate fragmentation risks if implemented effectively and consistently.

Furthermore, the legal and cultural context of AI deployment is critical. Many AI services are developed for North American markets, and their functionalities may not align with Australian regulatory frameworks or cultural norms. For instance, AI-driven candidate assessment tools need careful scrutiny to ensure they do not introduce bias or contravene local anti-discrimination laws. Similarly, AI services that monitor and report on staff activities raise greater concern in Australia than they do in the US.

The Australian context requires a thorough examination from both a legal and cultural perspective to avoid unintended consequences. In particular, ensuring equitable and transparent hiring practices when using AI is critical. Organisations must conduct due diligence to verify that AI solutions comply with local privacy regulations and employment laws, as well as their own AI governance policies.

Who’s Impacted?

  • Chief Information Officer (CIO): Responsible for ensuring seamless integration of AI solutions, maintaining data integrity across platforms, and establishing robust AI governance frameworks to prevent fragmentation and manage security risks.
  • Chief Human Resources Officer (CHRO): Accountable for evaluating the efficacy of AI in HR processes, ensuring legal compliance (especially concerning discrimination and privacy), and preserving a human-centric approach to talent management.
  • Head of HR Technology/HRIS Manager: Needs to assess the technical implications of integrating new AI agents, manage vendor relationships, and ensure that AI solutions support, rather than complicate, existing HR systems.
  • Legal Counsel/Compliance Officer: Must review AI functionalities for compliance with labour laws, data protection regulations (e.g., Australian Privacy Principles), and ethical guidelines related to algorithmic decision-making.
  • Chief Data Officer (CDO): Focused on data quality, consistency, and the ethical use of data by AI agents, ensuring that data relevant to HR is not siloed and supports comprehensive analytical capabilities.

Next Steps

  • Establish a cross-functional AI governance committee: Include representatives from HR, ICT, legal, and data teams to define policies for AI deployment, ethical use, and data management.
  • Conduct legal and ethical reviews: Ensure all AI solutions comply with Australian labour laws, privacy regulations, and cultural considerations, particularly concerning bias detection and mitigation in algorithmic decision-making.
  • Prioritise integration and interoperability: Demand that new AI-driven HCM solutions integrate seamlessly with existing systems to avoid data silos and fragmented processes. Leverage centralised management platforms for AI agents where available. 
  • Where possible, favour AI capabilities built into existing HCM solutions: Most HCM vendors are rapidly adding AI services to their platforms, which reduces integration overhead and the risk of new data silos.
  • Pilot and evaluate AI capabilities in controlled environments: Before widespread adoption, test AI capabilities with specific use cases to assess their effectiveness, identify potential issues, and gather feedback from end-users.
