VENDORiQ: Oracle Embeds Generative AI Across Technology Stack

Oracle's Generative AI service is now live, providing access to powerful language models. Learn about the implications for the AI industry and the challenges businesses must address.

The Latest

February 2024: Oracle’s Generative AI service is now live, providing access to large language models (LLMs) such as Cohere’s models and Meta’s Llama 2 for a range of business tasks. This fully managed service on Oracle Cloud Infrastructure (OCI) supports over 100 languages. It allows enterprise users to fine-tune models with their own data and to ground model responses in corporate content using retrieval-augmented generation (RAG). It can be deployed in the public cloud or on-premises with OCI Dedicated Region.
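To illustrate the RAG pattern mentioned above: rather than retraining a model on corporate data, relevant documents are retrieved at query time and prepended to the prompt. The sketch below is a minimal, hypothetical illustration in plain Python — it uses a toy bag-of-words similarity in place of a real embedding model, and the documents and function names are invented for illustration; it does not reflect Oracle’s actual API.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector.
    A production system would use a trained embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user's question with retrieved context before
    sending it to an LLM (the LLM call itself is omitted here)."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical corporate knowledge base for illustration only.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The Sydney office is open Monday to Friday, 9am to 5pm.",
]
print(build_prompt("What is the refund policy?", docs))
```

The key design point is that the organisation’s data stays in the retrieval layer: the model itself is unchanged, which is why RAG is often cheaper and easier to govern than fine-tuning or continued pre-training.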

Why It’s Important

As predicted by IBRS, more Software-as-a-Service (SaaS) vendors are rapidly embedding AI into their solutions, accelerating the adoption of AI across organisations. As adoption spreads, AI governance remains critical.

Upskilling or acquiring talent will also be critical, as the AI skills gap is set to widen. Applying AI effectively requires both technical knowledge of AI and expertise in the specific business areas where it is used.

Enterprise architects also need to shift their focus from infrastructure to business and information architecture, developing to-be models that accommodate frequent business change cost-effectively. This includes identifying ways to migrate legacy systems to SaaS and reduce customisation.

LLMs require significant training data, raising concerns about data security and privacy. Oracle’s approach must ensure user trust and compliance with relevant regulations.

Multi-cloud strategies also remain essential. While Oracle’s cloud may suit existing customers, its AI capabilities may lack the breadth of the hyperscalers’ offerings. However, using multiple clouds adds complexity. CIOs need to evaluate their options based on business needs and consider a multi-cloud strategy to avoid vendor lock-in.

Who’s Impacted

  • CEOs
  • AI developers
  • IT teams

What’s Next?

  • CIOs need to review their AI strategies to ensure the ethical use of AI and to balance the business benefits of adoption against risks such as bias.
  • Before formulating strategies, enterprises should recognise that pre-training, fine-tuning, or continuously training LLMs on corporate knowledge can be time-consuming and costly.
  • Information architects need to design information models that anticipate future needs, informed by data analysis and collaboration with business stakeholders and service providers.

Related IBRS Advisory

1. VENDORiQ: Oracle Introduces Oracle Database@Azure with Microsoft

2. Is AI Knowingly Embedded in Your ITSM Strategy?

3. Taking Control of Enterprise Large Language Model AI
