HPE Updates AI-Native Portfolio To Advance Generative AI, Machine- & Deep-Learning Applications

Portfolio includes full-stack gen AI solutions, a preview of HPE Machine Learning Inference Software and an enterprise RAG reference architecture.

Artificial intelligence (Image by Freepik)

Hewlett Packard Enterprise has announced updates to its AI-native portfolio to advance the operationalisation of generative AI, deep-learning and machine-learning applications.

According to the company, the updates include the availability of two full-stack gen AI solutions co-engineered by HPE and Nvidia, a preview of HPE Machine Learning Inference Software, an enterprise retrieval-augmented generation (RAG) reference architecture, and support for developing future products based on the new Nvidia Blackwell platform.

HPE and Nvidia "will continue to deliver co-designed AI software and hardware solutions that help our customers accelerate the development and deployment of gen AI from concept into production", Antonio Neri, chief executive officer of HPE, said.

Supercomputing-Enabled Gen AI Training

HPE's supercomputing solution for gen AI is aimed at organisations seeking a preconfigured and pretested full-stack solution for the development and training of large AI models. The solution is powered by Nvidia and can support up to 168 Nvidia GH200 Grace Hopper Superchips, the company said.

The solution can enable large enterprises, research institutions and government entities to streamline the model development process with an AI/ML software stack. According to HPE, it can help accelerate gen AI and deep-learning projects, including large language models, recommender systems and vector databases.

Gen AI Tuning, Inference

HPE said that its computing solution for generative AI is now available for enterprises. Co-engineered with Nvidia, the preconfigured fine-tuning and inference solution is designed to reduce ramp-up time and costs by offering compute, storage, software, networking and consulting services for the production of gen AI applications.

According to HPE, the AI-native full-stack solution can improve the speed, scale and control for tailoring foundational models using private data and help deploy gen AI applications within a hybrid cloud model.

From Prototype To Production

HPE said it is collaborating with Nvidia on software solutions that will help enterprises turn AI and ML proofs-of-concept into production applications. HPE Machine Learning Inference Software will allow enterprises to deploy ML models at scale.

For building and deploying gen AI applications that feature private data, HPE has developed a reference architecture for enterprise RAG, based on Nvidia’s NeMo Retriever microservice architecture. According to the company, the reference architecture will offer businesses a blueprint to create customised chatbots, generators or co-pilots. 
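To make the RAG idea concrete: in broad strokes, a RAG application retrieves the most relevant passages from a private document store and passes them to a language model along with the user's question, so answers are grounded in data the model was never trained on. The short Python sketch below illustrates only that generic pattern; it is not HPE's reference architecture or the NeMo Retriever API, and the retrieve() and generate() functions, document snippets and parameters in it are hypothetical stand-ins.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Generic illustration only,
# NOT HPE's enterprise RAG reference architecture or Nvidia NeMo Retriever.

from collections import Counter

# A private document store the model was never trained on (hypothetical data).
PRIVATE_DOCS = [
    "Refunds are processed within 14 days of a return request.",
    "Enterprise support tickets are answered within four business hours.",
    "The travel policy caps hotel rates at 250 dollars per night.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for a vector-database similarity search)."""
    q_words = Counter(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: sum(q_words[w] for w in d.lower().split()),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a hosted or fine-tuned LLM."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    # 1. Retrieve relevant private context, 2. augment the prompt, 3. generate.
    context = "\n".join(retrieve(question, PRIVATE_DOCS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("How quickly are refunds processed?"))
```

In a production deployment, the keyword retriever would typically be replaced by an embedding model and vector database, and the generate() stub by a served foundation model, which is the role the microservice-based reference architecture is meant to standardise.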

HPE also said it will develop future products based on the Nvidia Blackwell platform, which incorporates a second-generation Transformer Engine to accelerate gen AI workloads.

