Hewlett Packard Enterprise Deepens Integration With Nvidia On AI Factory Portfolio
HPE Private Cloud AI will support feature branch model updates from Nvidia AI Enterprise.

Hewlett Packard Enterprise has announced enhancements to the portfolio of Nvidia AI Computing by HPE solutions that support the AI lifecycle and meet the needs of enterprises, service providers, sovereigns and research & discovery organisations.
These updates deepen integration with Nvidia AI Enterprise, expand accelerated compute support for HPE Private Cloud AI and introduce a software development kit that connects HPE Alletra Storage MP X10000 to the Nvidia AI Data Platform. HPE is also releasing compute and software offerings built on the Nvidia RTX PRO 6000 Blackwell Server Edition GPU and the Nvidia Enterprise AI Factory validated design.
HPE Private Cloud AI Adds Feature Branch Support
HPE Private Cloud AI, a cloud-based AI factory co-developed with Nvidia, includes a dedicated developer solution for unified AI strategies. To further aid AI developers, Private Cloud AI will support feature branch model updates from Nvidia AI Enterprise, which include AI frameworks, NIM microservices for pre-trained models, and SDKs.
Feature branch model support will allow developers to test and validate new software features and optimisations for AI workloads. Together with the dedicated developer solution, Private Cloud AI will enable businesses to start with developer systems and scale to production-ready agentic and generative AI applications.
HPE's Newest Storage Solution Supports Nvidia AI Data Platform
HPE Alletra Storage MP X10000 will introduce an SDK designed to work with the Nvidia AI Data Platform reference design. Connecting HPE's newest data platform with Nvidia's customisable reference design is intended to give enterprises the data foundation for agentic AI.
The new X10000 SDK enables the integration of context-rich, AI-ready data directly into the Nvidia AI ecosystem. This empowers enterprises to streamline unstructured data pipelines for ingestion, inference, training and continuous learning across Nvidia infrastructure. Primary benefits of the SDK integration, illustrated conceptually after the list below, include:
Data value through flexible inline data processing, vector indexing, metadata enrichment and data management.
Efficiency through remote direct memory access (RDMA) transfers between GPU memory, system memory and the X10000.
Right-sizing deployments with modular, composable building blocks of the X10000, enabling customers to scale capacity and performance independently to align with workload requirements.
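The announcement does not document the X10000 SDK's actual interfaces, so the following is only a conceptual sketch of the pattern the bullets above describe: enriching unstructured text with metadata and indexing it as vectors so a downstream AI pipeline can retrieve context-rich data. Every name in it (EnrichedChunk, VectorIndex, the toy embed function) is hypothetical; a real deployment would use the SDK and a proper embedding model instead.

```python
# Hypothetical illustration only: this is NOT the X10000 SDK, whose API the
# announcement does not describe. It sketches the general ingestion pattern
# the bullets outline: enrich unstructured text with metadata, index it as
# vectors, and retrieve context-rich chunks for an AI pipeline.
from dataclasses import dataclass, field
import math


@dataclass
class EnrichedChunk:
    text: str
    metadata: dict                      # e.g. source path, department, timestamp
    vector: list = field(default_factory=list)


def embed(text: str, dims: int = 8) -> list:
    """Toy embedding (hashed bag-of-words); a stand-in for a real embedding model."""
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


class VectorIndex:
    """Minimal in-memory vector index with cosine-similarity search."""

    def __init__(self):
        self.chunks: list[EnrichedChunk] = []

    def ingest(self, text: str, metadata: dict) -> None:
        # Inline processing step: enrich with metadata and index the vector.
        self.chunks.append(EnrichedChunk(text, metadata, embed(text)))

    def search(self, query: str, top_k: int = 3) -> list[EnrichedChunk]:
        # Vectors are normalised, so the dot product is cosine similarity.
        q = embed(query)
        scored = sorted(
            self.chunks,
            key=lambda c: sum(a * b for a, b in zip(q, c.vector)),
            reverse=True,
        )
        return scored[:top_k]


if __name__ == "__main__":
    index = VectorIndex()
    index.ingest("Quarterly sales report for the EMEA region",
                 {"source": "reports/q3.pdf", "department": "sales"})
    index.ingest("GPU cluster maintenance schedule and tuning notes",
                 {"source": "ops/runbook.md", "department": "it"})
    for hit in index.search("GPU maintenance"):
        print(hit.metadata["source"], "->", hit.text)
```

In a production setting, the inline data processing, metadata enrichment and RDMA data movement described in the bullets would be handled inside the X10000 and Nvidia AI Data Platform stack rather than in application code as above.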
"By co-engineering cutting-edge AI technologies elevated by HPE's robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organisation, no matter where they are on their AI journey," Antonio Neri, chief executive officer of HPE, said.
"Enterprises can build the most advanced Nvidia AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI," said Jensen Huang, CEO of Nvidia.