VENDORiQ: Salesforce on the Agent-Hype Train with Tableau Einstein
Uncover the truth behind Salesforce’s Tableau Einstein integration with Agentforce, and the gap between its marketing promises and its technological capabilities.
The generative AI (genAI) hype cycle is currently experiencing its trough of disillusionment, particularly in the application of retrieval-augmented generation (RAG) pipelines to enterprise applications. Despite numerous attempts, these systems have struggled to reduce hallucinations in their output to levels acceptable for enterprise use. However, amidst this challenging period, a promising approach is emerging: fusing knowledge graphs into AI applications. Will it deliver?
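To make the idea concrete, the sketch below shows one way facts retrieved from a small knowledge graph can be used to ground an LLM prompt and constrain hallucination. The graph contents, the entity-matching rule, and the prompt wording are illustrative assumptions, not any vendor’s implementation.

```python
# Minimal sketch of knowledge-graph-augmented retrieval (graph RAG).
# All graph facts and the grounding prompt are illustrative assumptions.
import networkx as nx

# A tiny knowledge graph of (subject) -[relation]-> (object) facts.
kg = nx.DiGraph()
kg.add_edge("Tableau Einstein", "Agentforce", relation="integrates with")
kg.add_edge("Agentforce", "Salesforce Platform", relation="runs on")
kg.add_edge("Tableau Einstein", "analytics", relation="provides")

def retrieve_facts(graph: nx.DiGraph, query: str) -> list[str]:
    """Return facts whose subject or object is mentioned in the query."""
    query_lower = query.lower()
    facts = []
    for subject, obj, data in graph.edges(data=True):
        if subject.lower() in query_lower or obj.lower() in query_lower:
            facts.append(f"{subject} {data['relation']} {obj}")
    return facts

def grounded_prompt(query: str, facts: list[str]) -> str:
    """Assemble a prompt that constrains the model to the retrieved facts."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using only the facts below; say 'unknown' otherwise.\n"
        f"Facts:\n{context}\n\nQuestion: {query}"
    )

query = "How does Tableau Einstein relate to Agentforce?"
print(grounded_prompt(query, retrieve_facts(kg, query)))
```

Because the model is asked to answer only from graph-sourced facts, unsupported claims can be flagged rather than generated, which is the core appeal of the knowledge-graph approach.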
DevOps. MLOps. LLMOps. Do IT teams need yet another buzzword? While it may be tempting to think of large language model ops (LLMOps) as a subset of machine learning ops (MLOps), that would prevent us from exploiting the real benefits of building enterprise applications with large language models (LLMs) – scale and speed of development.
This paper outlines the key technological developments driving changes in retrieval-augmented generation (RAG). Mapping those changes onto a non-linear trajectory, we predict the near future and provide actionable recommendations for organisations to stay ahead of the curve in the rapidly evolving landscape of enterprise artificial intelligence (AI).
When all you have are large language models (LLMs), everything looks like a prompt. Enterprises must avoid falling into the trap of the law of the instrument, a.k.a. Maslow’s Hammer.
Maximise your artificial intelligence (AI) investments by strategically aligning platform choice with your organisation’s AI maturity level.
To truly harness the transformative power of artificial intelligence (AI), enterprises must shift their focus from standalone AI applications to comprehensive AI systems that are deeply integrated into their existing workflows and processes.
Treating large language models as an integration layer can accelerate enterprise application development by providing a natural language interface between diverse systems, APIs, and human users. This approach enables seamless data flow, intuitive user experiences, and rapid prototyping of complex applications, dramatically reducing development time and cost while opening new avenues for innovation and process optimisation across the organisation.
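The sketch below illustrates the integration-layer pattern under our own assumptions: a natural language request is translated into a structured call against one of several back-end systems. The tool names, the sample request, and the stubbed router standing in for a function-calling model are hypothetical, not a specific vendor’s API.

```python
# Minimal sketch of an LLM-as-integration-layer pattern.
# The tool registry and the stubbed "LLM router" are illustrative assumptions.
import json
from typing import Callable

# Back-end systems exposed to the model as callable tools.
def get_invoice_status(invoice_id: str) -> dict:
    return {"invoice_id": invoice_id, "status": "paid"}    # stubbed ERP lookup

def create_support_ticket(summary: str) -> dict:
    return {"ticket_id": "T-1001", "summary": summary}     # stubbed CRM call

TOOLS: dict[str, Callable[..., dict]] = {
    "get_invoice_status": get_invoice_status,
    "create_support_ticket": create_support_ticket,
}

def llm_route(request: str) -> dict:
    """Stand-in for a function-calling model that maps a natural language
    request to a structured tool call."""
    if "invoice" in request.lower():
        return {"tool": "get_invoice_status", "args": {"invoice_id": "INV-42"}}
    return {"tool": "create_support_ticket", "args": {"summary": request}}

def handle(request: str) -> str:
    call = llm_route(request)                      # natural language -> structured call
    result = TOOLS[call["tool"]](**call["args"])   # dispatch to the back-end system
    return json.dumps(result)

print(handle("What is the status of invoice INV-42?"))
```

In a production setting, the routing step would be performed by a function-calling model rather than keyword matching, but the shape of the integration layer, a registry of typed tools plus a dispatcher, remains the same.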
Artificial intelligence (AI) bias threatens to undermine the fairness and effectiveness of enterprise AI solutions. We examine key types of AI bias, their real-world impacts, and practical mitigation strategies – focusing on examples from Australia, New Zealand, and India. Learn key approaches to develop more equitable AI systems that serve diverse populations and unlock the full potential of AI for your business.