Oracle and NVIDIA broadened their collaboration at Oracle AI World with a set of announcements spanning massive new compute infrastructure and deep software integration designed to fold AI directly into a company’s data layer.
Ian Buck, VP of Hyperscale and High-Performance Computing at NVIDIA, said: “Through this latest collaboration, Oracle and NVIDIA are marking new frontiers in cutting-edge accelerated computing—streamlining database AI pipelines, speeding data processing, powering enterprise use cases and making inference easier to deploy and scale on OCI.”
The most eye-catching piece of kit is the OCI Zettascale10 computing cluster. Built around NVIDIA GPUs, the system is intended for heavy-duty AI training and inference jobs that would overwhelm standard servers. Oracle puts the platform’s peak AI capacity at 16 zettaFLOPS and pairs that compute with NVIDIA’s Spectrum-X Ethernet, a network fabric tuned to keep data flowing so organizations can scale up to millions of processors without starving the accelerators of work.
Power alone is only one part of the story. The collaboration includes multiple layers of integration intended to embed AI into operational workflows and analytic stores, cutting friction between raw data and model-driven outcomes.
Mahesh Thiagarajan, Executive VP of Oracle Cloud Infrastructure, commented: “OCI Zettascale10 delivers multi‑gigawatt capacity for the most challenging AI workloads with NVIDIA’s next-generation GPU platform.
“In addition, the native availability of NVIDIA AI Enterprise on OCI gives our joint customers a leading AI toolset close at hand to OCI’s 200+ cloud services, supporting a long tail of customer innovation.”
A central element of the announcement is Oracle AI Database 26ai. Oracle is pushing a model where AI comes to the data rather than moving business data into external model training environments. The database is positioned as the place to run models against operational systems and analytic lakes while keeping enterprise data in place and under local control.
Juan Loaiza, Executive VP of Oracle Database Technologies at Oracle, said: “By architecting AI and data together, Oracle AI Database makes ‘AI for Data’ simple to learn and simple to use. We enable our customers to easily deliver trusted AI insights, innovations, and productivity for all their data, everywhere, including both operational systems and analytic data lakes.”
One notable capability is agentic AI workflows that execute inside the database. Those agents can answer complex queries by combining sensitive corporate data with public sources without moving private material out of the secure environment. That is supported by a Unified Hybrid Vector Search that lets models search for context across formats — relational tables, JSON documents, spatial datasets and more — so a single query can pull together the pieces needed to respond accurately.
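To make that concrete, the sketch below shows roughly what a query mixing vector similarity with ordinary relational filters can look like from Python. The support_tickets schema, the embed() helper and the connection details are illustrative assumptions rather than anything Oracle has published; the SQL relies on the VECTOR_DISTANCE function and TO_VECTOR conversion available with Oracle’s vector search.

```python
# Minimal sketch of a hybrid query: vector similarity plus relational filters
# in a single SQL statement. The schema (support_tickets, customers), the
# embed() helper and the connection details are hypothetical.
import oracledb

def find_similar_tickets(question: str, region: str):
    # embed() stands in for whatever embedding model produces the query vector
    query_vec = "[" + ", ".join(str(x) for x in embed(question)) + "]"

    with oracledb.connect(user="app", password="***", dsn="mydb_high") as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT t.ticket_id, t.summary, c.region
                FROM   support_tickets t
                JOIN   customers c ON c.customer_id = t.customer_id
                WHERE  c.region = :region
                ORDER  BY VECTOR_DISTANCE(t.summary_vec, TO_VECTOR(:qv), COSINE)
                FETCH  FIRST 5 ROWS ONLY
                """,
                region=region,
                qv=query_vec,
            )
            return cur.fetchall()
```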
Security is front and center for that approach. Oracle said the new database supports NIST-approved quantum-resistant cryptography for data in flight and at rest, a protection intended to limit the risk from so-called “harvest now, decrypt later” attacks, in which encrypted material is stolen today with the aim of decrypting it once quantum systems become powerful enough.
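As a general illustration of what NIST-standardised post-quantum key establishment involves (not a description of Oracle’s internal implementation), the snippet below runs an ML-KEM key encapsulation through the liboqs-python bindings; the library choice and the exact algorithm name are assumptions made for the example.

```python
# Conceptual sketch only: ML-KEM (FIPS 203) key encapsulation via the
# liboqs-python bindings. This illustrates quantum-resistant key exchange in
# general and is not Oracle's implementation.
import oqs

ALG = "ML-KEM-768"  # algorithm name as exposed by recent liboqs builds

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()                    # receiver publishes a public key
    ciphertext, secret_sent = sender.encap_secret(public_key)   # sender derives a shared secret
    secret_received = receiver.decap_secret(ciphertext)         # receiver recovers the same secret
    assert secret_sent == secret_received
```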
Holger Mueller, VP and Principal Analyst at Constellation Research, commented: “Great AI needs great data. With Oracle AI Database 26ai, customers get both. It’s the single place where their business data lives—current, consistent, and secure. And it’s the best place to use AI on that data without moving it.
“To help simplify and accelerate AI adoption, AI Database 26ai includes impressive new AI features that go beyond AI Vector Search. A highlight is Oracle’s architecting agentic AI into the database, enabling customers to build, deploy, and manage their own in-database AI agents using a no-code visual platform that includes pre-built agents.”
The database’s programming interfaces are being connected with NVIDIA’s developer tools. Oracle said database APIs can plug into NVIDIA NeMo Retriever, a set of microservices that handle the retrieval plumbing enterprises need for modern AI. That link is intended to make retrieval-augmented generation (RAG) workflows easier to implement: with RAG, a language model fetches pertinent facts from corporate documents before producing an answer, which tends to make outputs more accurate and more useful for real business tasks.
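The pattern itself is straightforward. The sketch below shows a bare-bones RAG loop; the retrieve() and generate() callables are placeholders for whatever retrieval service and model endpoint are actually deployed, and are not NeMo Retriever or Oracle APIs.

```python
# Bare-bones retrieval-augmented generation loop. retrieve() and generate()
# are placeholders for the deployed retrieval service and model endpoint.
def answer(question: str, retrieve, generate, k: int = 4) -> str:
    # 1. Pull the k most relevant passages from enterprise documents.
    passages = retrieve(question, top_k=k)

    # 2. Ground the model by putting those passages in the prompt.
    context = "\n\n".join(p["text"] for p in passages)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate the final answer from the grounded prompt.
    return generate(prompt)
```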
Oracle also plans to boost its Private AI Services Container with GPU offload support. Customers running models in their own protected environments will be able to hand the compute-heavy task of creating vector embeddings to NVIDIA GPUs using the cuVS library, cutting the time needed to prepare data for AI search and related uses.
Outside the database, the partners are simplifying the data-processing and training pipeline. Oracle AI Data Platform now offers an option with native NVIDIA GPU access plus the NVIDIA RAPIDS Accelerator for Apache Spark. That lets data scientists and ML engineers speed up data preparation and model workflows on GPUs, often without having to rewrite code written for Spark on CPUs.
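In practice, enabling the accelerator is largely a configuration exercise. The sketch below shows a typical PySpark setup with the plugin switched on; the jar path and GPU amounts are placeholder values, and the DataFrame logic itself is unchanged from a CPU job.

```python
from pyspark.sql import SparkSession

# Existing Spark code stays the same; the RAPIDS Accelerator is switched on
# through configuration. Jar path and GPU amounts are placeholder values.
spark = (
    SparkSession.builder
    .appName("gpu-etl-sketch")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")   # RAPIDS Accelerator plugin
    .config("spark.rapids.sql.enabled", "true")
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.25")
    .config("spark.jars", "/opt/sparkRapidsPlugin/rapids-4-spark.jar")
    .getOrCreate()
)

# Unmodified DataFrame logic; supported operators run on the GPU.
df = spark.read.parquet("s3a://bucket/transactions/")
daily = df.groupBy("day", "merchant").sum("amount")
daily.write.mode("overwrite").parquet("s3a://bucket/daily_totals/")
```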
Many of those components are being gathered under the Oracle AI Hub, a centralized control plane that Oracle positions as the place organizations can assemble, deploy, and run AI solutions. From the hub, customers can deploy NIM microservices from NVIDIA — pre-packaged model components that handle common tasks — through a visual, no-code interface that aims to lower operational complexity.
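Because NIM microservices expose an OpenAI-compatible API once deployed, calling one from application code looks much like calling any hosted model. In the sketch below the host, port and model name are placeholders for whatever a team actually deploys from the hub.

```python
# Calling a deployed NIM microservice through its OpenAI-compatible endpoint.
# The base URL and model name are placeholders, not values from the announcement.
from openai import OpenAI

client = OpenAI(base_url="http://nim-host:8000/v1", api_key="not-used")

resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # whichever model this NIM serves
    messages=[{"role": "user", "content": "Summarise last quarter's churn drivers."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```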
To reduce procurement friction, the full NVIDIA AI Enterprise software suite is now accessible natively in the OCI Console. Developers can provision a GPU instance and switch on the NVIDIA toolset from within the same cloud interface, shortening setup time and removing the need for separate vendor agreements or lengthy provisioning steps.
Taken together, the announcements wrap new compute scale, database-level model capabilities, integrated developer tooling, and safer handling for sensitive enterprise data into a single set of interoperable offerings. Oracle and NVIDIA contend that aligning hardware, the data layer, and the model services into one operational environment will make it easier for businesses to adopt AI at scale while keeping control over where their information lives.

