Oracle has a better generative AI strategy, analysts say

Oracle’s recent updates to its OCI Generative AI Service, which competes with similar services from rivals AWS, Microsoft, and Google, make the service more attuned to the future needs of enterprises, analysts say. But Oracle may be far behind rivals in its overall generative AI model and service offerings.

“Oracle can offer enterprises a more streamlined approach to lowering the expense and resource or time commitment to pre-train, fine-tune, and continuously train large language models (LLMs) on an enterprise’s knowledge or data, which has proven to be an obstacle in many of today’s enterprise environments, with the exception of some call center and customer experience support applications,” said Ron Westfall, research director at The Futurum Group.

Oracle’s differentiation, according to Westfall, lies in its ability to drive generative AI innovation across an enterprise because it draws on a vast array of built-in portfolio capabilities across its Fusion and NetSuite applications, AI services and infrastructure, and machine learning (ML) for data platforms, such as MySQL HeatWave Vector Store and AI Vector Search in Oracle Database.

While AWS and Google seem unable to directly counter Oracle’s diverse array of enterprise applications within their own portfolios, rivals like IBM trail Oracle in offering a cloud database globally, Westfall explained.

Lowering cost and complexity is key

Westfall believes that Oracle offers sharp price-performance differentiators across key cloud database categories, which can ease generative AI adoption and scaling in those environments.

Expanding on Westfall’s premise, Bradley Shimmin, chief analyst at research firm Omdia, said that Oracle is trying to integrate fundamental elements of generative AI into its basic offerings, especially databases, to optimize compute resources and bring down cost.

In the wake of breakneck innovation around generative AI, technology service providers such as Oracle understand that optimizing the use of hardware matters when deploying an AI model with billions of parameters at scale with a tolerable degree of latency, according to Shimmin.

“We are experiencing the same sort of wake-up call in the adjacent areas of data and analytics as well, particularly as databases and data processing tools start to play an increasingly important role in supporting generative AI-based offerings, as is the case with use cases like retrieval-augmented generation,” Shimmin said.

While it is one thing to build a basic retrieval-augmented generation (RAG) pipeline capable of indexing a few PDFs to support a single-user LLM, it’s a whole new challenge to implement RAG for petabytes of continually evolving corporate data, and to deliver insights from that data to a global audience in under a millisecond, the analyst explained.

“It’s no surprise, then, to see so many database vendors, such as MongoDB, adopting in-database machine learning capabilities and more recently building, storing, and retrieving vector embedding within the same database where the data being vectorized lives. It’s all about minimizing complexity and maximizing spend,” Shimmin said.

The underlying principle is to cut down on the movement of data between two databases, between databases and storage media, and between storage media and model inferencing chips.
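To make that pattern concrete, here is a toy sketch in Python, using only the standard library’s SQLite module, of keeping embeddings in the same database as the source rows so a similarity query never leaves it. The schema, dummy vectors, and brute-force cosine scan are purely illustrative; a production system would use a native vector type and an approximate-nearest-neighbor index.

```python
import math
import sqlite3
import struct

def pack(vec):
    # Serialize a float32 vector for storage alongside the source row.
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    return struct.unpack(f"{len(blob) // 4}f", blob)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, embedding BLOB)")

# Dummy 3-dimensional embeddings stand in for a real embedding model's output.
rows = [
    (1, "Quarterly revenue grew 8%", [0.9, 0.1, 0.0]),
    (2, "New HR onboarding policy", [0.1, 0.8, 0.2]),
    (3, "Data center expansion plan", [0.2, 0.1, 0.9]),
]
db.executemany(
    "INSERT INTO docs VALUES (?, ?, ?)",
    [(i, body, pack(vec)) for i, body, vec in rows],
)

# Similarity search happens where the data lives: no copy to a second store.
query_vec = [0.85, 0.15, 0.05]  # e.g., the embedding of "How did sales do?"
ranked = sorted(
    ((cosine(query_vec, unpack(blob)), body)
     for body, blob in db.execute("SELECT body, embedding FROM docs")),
    reverse=True,
)
print(ranked[0])  # the revenue document scores highest
```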

Further, Shimmin said that enterprises may in many cases have to maintain two separate databases, one for vectors and one for source data, an expensive arrangement given the cost of managing the data integration and latency between the two.

“Companies like Oracle, which have sought to optimize their cloud infrastructure from database processing all the way down to chip networking and data retrieval, seem well-positioned to provide differentiated value to their customers by lowering complexity while elevating performance,” the analyst explained.

‘Far behind’ rivals in models and services

While Oracle’s strategy may appeal to enterprise customers, Andy Thurai, principal analyst at Constellation Research, believes that Oracle is “far behind” its rivals when compared on the basis of overall generative AI offerings.

“Oracle’s option of providing use-as-you-need hosted service competes against a lot more powerful offering from AWS, which has more options and functions compared to OCI’s offering,” Thurai said. He also noted that Oracle offers far fewer LLMs than its rivals, and that those it does offer are more limited in their use.

However, Thurai maintains that Oracle’s option to run the generative AI service in Oracle Cloud or on-premises via OCI Dedicated Region is a somewhat unique proposition that might interest some large enterprise customers, especially those in regulated industries.

“The option to integrate with Oracle’s ERP, HCM, SCM, and CX applications running on OCI could make this more attractive, if priced right, for their user base,” the analyst said, adding that failure to do so would help AWS take a more favorable position with enterprise customers.

What’s new in the OCI Generative AI Service

Oracle has been rolling out its three-tier generative AI strategy across multiple product offerings for the better part of a year. The company released the OCI Generative AI Service in beta preview in September. Now Oracle has introduced new models from Cohere and Meta, new AI agents, and a new low-code framework for managing open source LLMs, and has made the service generally available.

The new models include Meta’s Llama 2 70B, a text generation model optimized for chat use cases, and the latest versions of Cohere’s Command, Summarize, and Embed models. These models will be available in a managed service consumed via API calls, Oracle said in a statement, adding that they can also be fine-tuned via the updated service.
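As a rough sketch of what consuming one of these hosted models looks like, the snippet below uses the OCI Python SDK. The service endpoint, model IDs, and request classes follow Oracle’s published examples at the time of writing and may differ across SDK versions, so treat the exact names as assumptions rather than a definitive reference.

```python
# Sketch only: class names and model IDs are assumptions based on
# Oracle's published OCI SDK examples and may vary by SDK version.
import oci

config = oci.config.from_file()  # reads credentials from ~/.oci/config
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

response = client.generate_text(
    oci.generative_ai_inference.models.GenerateTextDetails(
        compartment_id="ocid1.compartment.oc1..example",  # your compartment OCID
        serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
            model_id="cohere.command",  # or a Llama 2 model ID
        ),
        inference_request=oci.generative_ai_inference.models.CohereLlmInferenceRequest(
            prompt="Summarize our Q4 support tickets in three bullet points.",
            max_tokens=200,
            temperature=0.3,
        ),
    )
)
print(response.data)
```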

An AI agent for retrieval-augmented generation

In addition to the new models, Oracle has added new AI agents to the service to help enterprises make the most of their enterprise data while using large language models and building generative AI-based applications.

The first of the AI agents introduced in beta is the RAG agent. This agent, which works similarly to LangChain, combines the power of LLMs and enterprise search built on OCI OpenSearch to provide contextualized results that are enhanced with enterprise data, said Vinod Mamtani, vice president of OCI’s Generative AI Services.

When an enterprise user inputs a natural language query into the RAG agent via a business application, the query is passed to OCI OpenSearch, which performs vector, or semantic, search to read and collect relevant information from the enterprise’s data repository. The search results are then ranked by a ReRanker LLM, which passes the top-ranked results to a text generation LLM that answers the query in natural language.

The text generation LLM has checks to ensure that the returned response is grounded, or in other words suitable for consumption by the user. If the response fails to meet the grounding requirements, the loop runs again, the company said, adding that this eliminates the need for specialists such as developers and data scientists.

“The information retrieved is current—even with dynamic data stores—and the results are provided with references to the original source data,” Mamtani explained.
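A minimal sketch of that loop might look like the following Python, with simple stand-in functions in place of OCI OpenSearch, the ReRanker LLM, and the text generation LLM. Every name here is a hypothetical placeholder for illustration, not an Oracle API.

```python
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat.",
    "The warranty covers parts for two years.",
]

def overlap(a, b):
    # Crude relevance score: shared words between query and document.
    return len(set(a.lower().split()) & set(b.lower().split()))

def vector_search(query, k=2):
    # Stand-in for OCI OpenSearch semantic retrieval.
    return sorted(DOCS, key=lambda d: overlap(query, d), reverse=True)[:k]

def rerank(query, docs):
    # Stand-in for the ReRanker LLM: reorder candidates by relevance.
    return sorted(docs, key=lambda d: overlap(query, d), reverse=True)

def generate(query, context):
    # Stand-in for the text generation LLM.
    return f"According to our records: {context[0]}"

def grounded(answer, context):
    # Stand-in grounding check: the answer must quote retrieved text.
    return any(doc in answer for doc in context)

def rag_agent(query, max_attempts=3):
    for _ in range(max_attempts):  # the loop re-runs on a failed check
        context = rerank(query, vector_search(query))
        answer = generate(query, context)
        if grounded(answer, context):
            return answer
    return "No grounded answer found."

print(rag_agent("How long do refunds take?"))
```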


Upcoming updates to the RAG agent, expected in the first half of 2024, will add support for a wider range of data search and aggregation tools and provide access to Oracle Database 23c with AI Vector Search and MySQL HeatWave with Vector Store.

Other capabilities, also due in the same time frame, include the ability to create an AI agent from within the OCI console. A user will be able to create an agent by specifying the task to be done and attaching a data source, Mamtani said, adding that these agents will use either the Llama 2 or Cohere LLMs by default.

AI agents based on the ReAct framework

These AI agents, according to Oracle, are being built following the ReAct framework, introduced in a paper by researchers from Princeton University and Google. Agents use the framework to reason, act, and plan based on a series of thoughts, actions, and observations.
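In compressed form, a ReAct-style agent alternates thoughts, actions (such as tool or API calls), and observations until it can answer. The sketch below scripts the model’s outputs for illustration; in a real agent each step would come from an LLM, and the tool here is a made-up placeholder.

```python
# Toy ReAct loop: the "model" outputs are scripted, and lookup_order
# is a made-up tool standing in for a real API the agent could call.
TOOLS = {
    "lookup_order": lambda arg: {"A-100": "shipped Jan 3"}.get(arg, "not found"),
}

SCRIPT = [
    ("thought", "I need the status of order A-100."),
    ("action", ("lookup_order", "A-100")),
    ("answer", "Order A-100 shipped on Jan 3."),
]

def react_agent(steps):
    transcript = []
    for kind, content in steps:
        if kind == "thought":  # reason about what to do next
            transcript.append(f"Thought: {content}")
        elif kind == "action":  # act: call a tool, record the observation
            tool, arg = content
            observation = TOOLS[tool](arg)
            transcript.append(f"Action: {tool}({arg}) -> Observation: {observation}")
        elif kind == "answer":  # stop once the agent can answer
            transcript.append(f"Answer: {content}")
            break
    return "\n".join(transcript)

print(react_agent(SCRIPT))
```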

Mamtani said these capabilities will allow the agents to go beyond information retrieval tasks and call APIs on behalf of the user as well as automate other tasks. Oracle also plans to add multi-turn agents to the service that can be asked to retain the memory of past interactions to further enrich the model context and its responses.

Most of these agents and their actions, according to the company, can be added to its suite of SaaS applications, including Oracle Fusion Cloud Applications Suite, Oracle NetSuite, and industry applications such as Oracle Cerner.

Additionally, in an effort to help enterprises use and manage LLMs with open source libraries, Oracle is adding a new capability to its OCI Data Science offering, dubbed the AI Quick Actions feature. This feature, which will be in beta next month, enables no-code access to a variety of open-source LLMs.

