OpenAI ramps up enterprise support with a focus on security, control, and cost

OpenAI, best known for its ChatGPT chatbot and the GPT family of large language models, is making a strong push into the enterprise market. In a move that could intensify competition among enterprise AI players, the company announced a slew of new features designed to give businesses more control, tighter security, and lower costs when integrating OpenAI’s AI technologies into their operations.

“We’re deepening our support for enterprises with new features that are useful for both large businesses and any developers who are scaling quickly on our platform,” OpenAI said.

Among the key upgrades is Private Link, a feature that enables direct communication between a customer’s cloud infrastructure, such as Microsoft Azure, and OpenAI, keeping that traffic off the open internet and reducing its exposure to cybersecurity threats. OpenAI now also offers multi-factor authentication (MFA), adding a layer of security to user accounts.

“These are new additions to our existing stack of enterprise security features including SOC 2 Type II certification, single sign-on (SSO), data encryption at rest using AES-256 and in transit using TLS 1.2, and role-based access controls,” OpenAI said in the announcement. “We also offer Business Associate Agreements for healthcare companies that require HIPAA compliance and a zero data retention policy for API customers with a qualifying use case.”
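For API customers, routing requests through a private endpoint typically comes down to overriding the SDK’s base URL. The following is a minimal sketch using the official openai Python library; the endpoint hostname and key are placeholders, not values from OpenAI’s documentation:

```python
from openai import OpenAI

# Point the SDK at a private endpoint rather than the public api.openai.com
# host. The hostname below is hypothetical; the real value would come from
# your Private Link configuration.
client = OpenAI(
    api_key="sk-...",  # placeholder API key
    base_url="https://openai.example.internal/v1",  # hypothetical private endpoint
)

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Hello from inside the private network"}],
)
print(response.choices[0].message.content)
```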

A new Projects feature promises to give organizations greater oversight and control over individual projects.

“This includes the ability to scope roles and API keys to specific projects, restrict/allow which models are available, and set usage- and rate-based limits to grant access while avoiding unexpected overages. Project owners will also have the ability to create service account API keys, which give access to projects without being tied to an individual user,” the company said.
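In the openai Python SDK, project scoping surfaces as an optional project identifier passed alongside the API key. A minimal sketch, assuming placeholder organization, project, and service-account-key values:

```python
from openai import OpenAI

# Scope all calls from this client to a single project. The key and IDs
# below are placeholders.
client = OpenAI(
    api_key="sk-svcacct-...",    # hypothetical service account API key, not tied to a user
    organization="org-example",  # placeholder organization ID
    project="proj_example",      # placeholder project ID
)

# The models list should reflect any restrict/allow rules set on the project.
for model in client.models.list():
    print(model.id)
```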

OpenAI also announced two new ways to help businesses manage costs as they scale up their AI usage: discounts of up to 50% on committed throughput, and reduced pricing for asynchronous workloads.

“Customers with a sustained level of tokens per minute (TPM) usage on GPT-4 or GPT-4 Turbo can request access to provisioned throughput to get discounts ranging from 10% to 50% based on the size of the commitment,” the announcement noted. The company has also introduced a Batch API, priced at a 50% discount, specifically for non-urgent tasks. “This is ideal for use cases like model evaluation, offline classification, summarization, and synthetic data generation,” the company said.
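The Batch API follows an upload-then-poll pattern: requests are submitted as a JSONL file, and results land in an output file once the batch completes within its 24-hour window. A minimal sketch of that flow with the openai Python library (file names and request bodies are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of requests.jsonl is one request, for example:
# {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-3.5-turbo",
#           "messages": [{"role": "user", "content": "Summarize ..."}]}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # results are returned within 24 hours
)

# Later: poll the batch and download the output file once it completes.
status = client.batches.retrieve(batch.id)
if status.status == "completed":
    results = client.files.content(status.output_file_id)
    print(results.text)
```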

Developers working with OpenAI’s Assistants API will also see several improvements. Notable among them is improved retrieval with “file_search,” which can ingest up to 10,000 files per assistant, up from the previous limit of 20. “The tool is faster, supports parallel queries through multi-threaded searches, and has enhanced reranking and query rewriting,” OpenAI said.
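Per OpenAI’s Assistants API documentation at the time, enabling file_search means collecting uploaded files in a vector store and attaching that store to the assistant. A minimal sketch, with placeholder names and file paths:

```python
from openai import OpenAI

client = OpenAI()

# Gather uploaded documents into a vector store for retrieval.
vector_store = client.beta.vector_stores.create(name="product-docs")  # placeholder name
file = client.files.create(file=open("manual.pdf", "rb"), purpose="assistants")
client.beta.vector_stores.files.create(vector_store_id=vector_store.id, file_id=file.id)

# Create an assistant with the file_search tool pointed at that store.
assistant = client.beta.assistants.create(
    model="gpt-4-turbo",
    instructions="Answer questions using the attached documentation.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```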

