Understanding ‘Reasoning’ in Generative AI: A Misaligned Analogy to Human Thought

Generative AI mimics reasoning through data-driven pattern matching, diverging fundamentally from human cognitive reasoning. Mistaking this simulation for human thought leads to costly mistakes in how businesses apply AI.

Conclusion

Generative artificial intelligence’s (GenAI’s) reasoning is a simulation of thought, relying on probabilistic pattern matching and what amounts to database-driven processes. While it can provide valuable insights, it lacks the comprehension, adaptability, and goal-driven intentionality of human reasoning. Business leaders must understand these distinctions to set realistic expectations and integrate artificial intelligence (AI) into decision-making processes. Organisations also need to build failsafe mechanisms into AI-assisted decision-making. Agentic AI will not solve these problems.

Observations

As GenAI becomes increasingly integrated into business operations, leaders need to understand what is actually meant by its ‘reasoning’ capabilities. Misconceptions about GenAI’s reasoning lead to challenges in interpreting its outputs. Without a clear grasp of these distinctions, businesses risk misusing AI, leading to poor decision-making, inefficiencies, and reputational harm.

Therefore, it is essential to understand that the term reasoning in the field of GenAI differs significantly from reasoning as humans normally apply it in a business context. The following definitions highlight the differences:

Defining Reasoning in Human and Generative AI Contexts

Human Reasoning

Human reasoning is a conscious, goal-driven cognitive process involving subjective understanding, adaptability, and the ability to draw conclusions based on context and implications. It is characterised by:
  • Comprehension: the ability to understand underlying concepts and filter irrelevant details.
  • Intentionality: a goal-oriented approach to problem-solving.
  • Adaptability: the capacity to adjust reasoning to new or complex situations.
  • Awareness: a subjective and conscious engagement with the reasoning process.
  • Experience: humans bring their past experiences and culture into their thinking, which contributes to an often unconscious bias in reasoning. This bias can be a boon by shortening the reasoning process and a curse by limiting or skewing it.

GenAI Reasoning

GenAI reasoning is a probabilistic, database-driven process that simulates reasoning by recognising and generating patterns from training data. It lacks true comprehension, awareness, or intentionality. Key characteristics include:
  • Pattern Matching: AI predicts responses based on patterns in its training data.
  • Database-Driven: AI relies on specialised databases, such as vector databases, to retrieve and recombine information.
  • Simulation of Thought: techniques like chain-of-thought prompting and inference-time compute enhance the appearance of reasoning but do not equate to human-like understanding. They are still, at their foundation, probabilistic.

Implications

Agentic AI must adopt a ‘show your work’ principle: its reasoning must be explicitly visible to developers and staff, clearly illustrating the questions (prompts) it poses at each stage of the reasoning, the information collected, curated, and summarised, and how this is used in the final output.
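To make the ‘show your work’ principle concrete, the sketch below records each prompt an agent poses, the sources it consults, and its intermediate summaries, so the full chain can be audited. This is a minimal illustration only; the ReasoningTrace structure and its field names are assumptions, not an established standard or library.

```python
# Minimal sketch of a 'show your work' trace for an agentic AI pipeline.
# ReasoningTrace and its fields are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReasoningStep:
    prompt: str          # the question (prompt) the agent posed at this stage
    sources: List[str]   # the information collected and curated for this step
    summary: str         # what the agent concluded from those sources

@dataclass
class ReasoningTrace:
    goal: str
    steps: List[ReasoningStep] = field(default_factory=list)
    final_output: str = ""

    def record(self, prompt: str, sources: List[str], summary: str) -> None:
        self.steps.append(ReasoningStep(prompt, sources, summary))

    def report(self) -> str:
        """Render the full trace so developers and staff can audit it."""
        lines = [f"Goal: {self.goal}"]
        for i, s in enumerate(self.steps, 1):
            lines.append(f"Step {i} prompt: {s.prompt}")
            lines.append(f"  Sources: {', '.join(s.sources)}")
            lines.append(f"  Summary: {s.summary}")
        lines.append(f"Final output: {self.final_output}")
        return "\n".join(lines)

trace = ReasoningTrace(goal="Summarise supplier risk")
trace.record("Which suppliers missed audits?", ["audit_log_2024"], "Two suppliers flagged.")
trace.final_output = "Two suppliers require follow-up audits."
print(trace.report())
```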

The Impact of AI Misconceptions in the Workplace

One of the most pervasive misconceptions is that AI can analyse like humans. In reality, AI systems rely on probabilistic pattern matching and database-driven processes rather than true cognitive understanding. For example, large language models (LLMs) generate responses based on patterns in their training data, not on comprehension or intentionality. This misunderstanding can lead to overestimating AI’s capabilities, such as assuming it can handle complex, context-sensitive decisions.
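The toy sketch below illustrates this mechanism at its simplest. It is a bigram counter, vastly cruder than any real LLM, but it shows the core idea: the next token is sampled in proportion to patterns observed in training text, with no model of meaning involved.

```python
# Toy illustration (not a real LLM) of probabilistic next-token prediction:
# the 'model' is just counted patterns from a tiny training corpus.
import random
from collections import Counter, defaultdict

corpus = "the report shows growth . the report shows risk . the market shows growth .".split()

# Count which token follows each token in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it followed
    'prev' in training -- pattern matching, not comprehension."""
    counts = follows[prev]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token("shows"))  # 'growth' ~2/3 of the time, 'risk' ~1/3
```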

Another common myth is that AI systems are inherently accurate. While AI can process vast amounts of data and identify patterns, it is not immune to errors. Issues such as token bias, where small changes in input can drastically alter outputs, and hallucinations, where AI generates incorrect or nonsensical information, highlight its limitations. These errors can mislead decision-makers who assume AI-generated output is reliable.

The belief that AI can replace human judgement is both misleading and dangerous. While AI excels in data analysis and pattern recognition, it lacks the conscious and social understanding required for nuanced decision-making. For instance, AI does not account for ethical considerations, emotional intelligence, or the broader implications of its outputs. While efforts are being made to overlay guardrails that aim to give AI more human-like judgement, these approaches do not address the foundational issues and introduce new challenges, such as jailbreaking.

Some business leaders view AI primarily as a means to reduce costs by automating tasks. While AI can improve efficiency and reduce operational expenses, its greater value lies in augmenting human capabilities. Focusing solely on cost-cutting risks underutilising AI’s potential and alienating employees who fear job displacement.

Challenges in Understanding AI Reasoning

Interpreting AI reasoning is fraught with both technical and conceptual challenges. These difficulties can hinder trust, transparency, and the effective integration of AI into business processes.

Lack of Explainability: AI systems, particularly those using deep learning, often function as black boxes where the decision-making process is opaque. This lack of transparency makes it difficult for users to understand how AI arrives at specific conclusions. For example, in high-stakes environments like healthcare or finance, the inability to explain AI decisions can result in regulatory and ethical concerns.

Data Bias and Generalisation: GenAI models are only as good as the data they are trained on. If the training data is biased, the AI’s reasoning will reflect those biases, leading to unfair or inaccurate outcomes. Additionally, AI systems often struggle with generalisation, meaning they may perform well on training data but fail in new or unfamiliar scenarios. This limitation is particularly problematic in dynamic business environments where adaptability is crucial.

Token Bias and Probabilistic Reasoning: in GenAI, tokens are the basic units into which text is broken down for processing and generation. GenAI’s reliance on probabilistic pattern matching introduces challenges such as token bias, where minor changes in input can lead to drastically different outputs. This unpredictability complicates the use of AI in decision-making, as leaders cannot always anticipate how the system will respond to varying inputs. For example, an AI system might misinterpret ambiguous or incomplete data, leading to erroneous conclusions.
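One practical response is to probe for this sensitivity directly: pose the same question in several slightly different phrasings and check whether the answers agree. The sketch below is illustrative only; query_model() is a hypothetical stand-in for a real model API, implemented here as a deliberately wording-sensitive fake so the probe has something to catch.

```python
# Minimal sketch of a robustness probe for token bias: pose the same
# question in several phrasings and compare the answers.
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call. This fake is
    # deliberately wording-sensitive to mimic token bias.
    return "yes" if "comply" in prompt else "no"

def probe_consistency(variants: list[str]) -> Counter:
    """Count distinct answers across paraphrases. A model free of token
    bias should answer every variant the same way."""
    return Counter(query_model(v).strip().lower() for v in variants)

variants = [
    "Is supplier A compliant with policy X?",
    "Does supplier A comply with policy X?",
    "Supplier A: compliant with policy X, yes or no?",
]
print(probe_consistency(variants))  # more than one answer = warning sign
```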

Ethical and Moral Reasoning: AI systems cannot navigate ethical dilemmas or moral reasoning, which are inherently human capabilities. However, researchers have shown that purpose-built LLMs can mimic involved ethical discussions, though this type of application suffers from the previous three challenges. This limitation poses significant challenges in areas where ethical considerations are paramount, such as employee management, social policy development, justice, and public service delivery.

Best Practices

Despite its limitations, AI offers significant opportunities for enhancing business decision-making. By adopting best practices, organisations can maximise AI’s benefits while mitigating risks.

Establish AI Oversight: create a research ethics working group that monitors AI opportunities and implements governance to ensure the organisation engages with AI within agreed ethical and trusted-AI guardrails. One such guardrail is watching for the false analogy to human thought (anthropomorphism) in agentic AI projects. Such a group can also help implement and monitor the other best practices below.

Cultivate a Data-Driven Culture: encouraging a culture of data literacy and exploration is essential for leveraging AI effectively. This involves training employees to understand and interpret data, and encouraging them to stay curious and challenge assumptions. A data-driven culture ensures AI insights are used to inform decisions rather than replace human judgement.

Embrace AI as a Partner: AI should be viewed as a tool to augment human capabilities rather than as a replacement for them. For example, AI can analyse large amounts of unstructured information to uncover patterns and themes, providing insights that might be missed by human analysis alone. Integrating AI into strategic planning can enhance decision-making and operational efficiency.

Determine Human and AI Contributions: AI should complement human decision-making, not replace it. For example, while AI can handle routine, research-heavy, and data-intensive tasks, humans should oversee decisions requiring empathy, creativity, or ethical considerations. Knowing where and when to use AI reasoning, and where and when not to, will become a significant point of differentiation for businesses. This collaborative approach ensures that AI’s strengths are leveraged without compromising human values.
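Making this division of labour explicit, rather than leaving it to ad hoc judgement, can be as simple as a documented triage rule. The sketch below is purely hypothetical; the task attributes and thresholds are invented for illustration, and in practice the criteria should be set by the organisation’s oversight group.

```python
# Minimal sketch of an explicit human/AI triage rule. Task fields and
# thresholds are invented; real criteria belong to the governance group.
from dataclasses import dataclass

@dataclass
class Task:
    routine: bool          # repeatable, well-specified work
    data_intensive: bool   # large volumes of material to sift
    ethical_weight: float  # 0.0 (none) to 1.0 (high)
    affects_people: bool   # employment, justice, service decisions

def route(task: Task) -> str:
    """Decide who leads: AI, human, or AI-drafts-human-decides."""
    if task.ethical_weight > 0.5 or task.affects_people:
        return "human decides (AI may assist with research)"
    if task.routine and task.data_intensive:
        return "AI handles, human spot-checks"
    return "AI drafts, human reviews and decides"

print(route(Task(routine=True, data_intensive=True,
                 ethical_weight=0.1, affects_people=False)))
```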

Focus on Explainability and Transparency: implementing explainable AI (XAI) frameworks can make AI decision-making processes more transparent, building stakeholder trust. Regular audits of AI systems can identify and mitigate biases, ensuring that outputs are fair and reliable. Transparency is crucial in regulated industries where accountability is critical.
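As one concrete illustration of the audit side of this practice, the sketch below reports which inputs actually drive a model’s decisions, using permutation importance from scikit-learn. It uses a classical model on synthetic data for clarity (deep models and LLMs require different tooling), and the feature names are invented for illustration.

```python
# Minimal sketch of one XAI practice: publishing which inputs drive a
# model's decisions. Classical model and synthetic data for clarity.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["credit_history", "income", "tenure", "region"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Publish this ranking alongside decisions so stakeholders can audit them.
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```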

Explore Advanced AI Techniques: techniques such as chain-of-thought prompting and inference-time compute can improve AI’s reasoning capabilities. These methods encourage AI to articulate its reasoning process, providing more accurate and interpretable outputs. Investing in these advancements can enhance AI’s utility in complex decision-making scenarios.
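The sketch below shows one common way these two techniques combine: a chain-of-thought prompt plus self-consistency, where several reasoning chains are sampled and the majority answer wins; spending more samples is one form of inference-time compute. The generate() function is a hypothetical placeholder for any sampling model API, not a real library call.

```python
# Minimal sketch of chain-of-thought prompting with self-consistency.
# generate() is a hypothetical placeholder for a sampling model API.
from collections import Counter

COT_TEMPLATE = (
    "Question: {question}\n"
    "Think step by step, then end with 'Answer: <answer>'."
)

def generate(prompt: str, temperature: float = 0.8) -> str:
    raise NotImplementedError("replace with your model API call")

def self_consistent_answer(question: str, samples: int = 5) -> str:
    """Sample several reasoning chains and return the majority answer.
    More samples means more inference-time compute, usually better accuracy."""
    answers = []
    for _ in range(samples):
        chain = generate(COT_TEMPLATE.format(question=question))
        answers.append(chain.rsplit("Answer:", 1)[-1].strip())
    return Counter(answers).most_common(1)[0][0]
```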

Next Steps

  1. Clarify Terminology and Educate Stakeholders: develop training programmes to educate business leaders and employees on the distinctions between human and AI reasoning. Emphasise that AI reasoning is a simulation based on pattern recognition, not true cognitive understanding.
  2. Set Realistic Expectations for AI Integration: define clear roles for AI in decision-making processes, focusing on its strengths in data analysis and pattern recognition. Avoid over-reliance on AI for decisions requiring empathy, creativity, or deep contextual understanding.
  3. Enhance Transparency and Accountability: implement XAI frameworks to make AI decision-making processes more transparent. Adopt AI models that provide high levels of transparency and explainability (following the IBRS AI Vendor Trust Framework).
  4. Balance Human and AI Contributions: use reasoning AI solutions to augment human decision-making, not replace it.
  5. Invest in Advanced AI Techniques: explore advancements like chain-of-thought prompting and inference-time compute to improve AI’s problem-solving capabilities.
