VMware, Nvidia Collaborate To Help Enterprises Unlock Generative AI’s Potential

Technology companies VMware Inc. and Nvidia Corp. have announced the expansion of their partnership to enable enterprises that run on VMware's cloud infrastructure to leverage generative artificial intelligence.

VMware Private AI Foundation with Nvidia will allow enterprises to customise models and run generative AI applications, including chatbots, assistants, search and summarisation. The platform will feature generative AI software and accelerated computing from Nvidia, built on VMware Cloud Foundation and optimised for AI, VMware said.

“Generative AI and multi-cloud are the perfect match. Customer data is everywhere—in their data centres, at the edge, and in their clouds. Together, with Nvidia, we will empower enterprises to run their generative AI workloads adjacent to their data with confidence, while addressing their corporate data privacy, security, and control concerns,” said Raghu Raghuram, CEO of VMware.

Full-Stack Computing To Unlock Generative AI's Potential

To achieve business benefits faster, enterprises are seeking to streamline development, testing, and deployment of generative AI applications. McKinsey estimates that generative AI could add up to $4.4 trillion annually to the global economy.

VMware Private AI Foundation with Nvidia will enable enterprises to harness this capability by customising large language models, producing more secure and private models for internal use, offering generative AI as a service to their users, and securely running inference workloads at scale, VMware said.

The platform will be built on VMware Cloud Foundation and Nvidia AI Enterprise software and is expected to:

  • Enable enterprises to run AI services adjacent to their data location with an architecture that preserves data privacy and enables secure access.

  • Allow enterprises to choose where to build and run their models.

  • Deliver performance equal to, and in some use cases exceeding, bare metal with Nvidia-accelerated infrastructure.

  • Enable AI workloads to scale up to 16 vGPUs/GPUs to speed generative AI model fine-tuning and deployment.

  • Lower cost and create a pooled resource environment that can be shared efficiently across teams.

  • Accelerate storage, allowing for direct I/O transfer from storage to GPUs without CPU involvement.

  • Accelerate networking between GPUs in multi-GPU deployments, avoiding bottlenecks.

  • Enable fast prototyping capabilities.

According to VMware, the platform will feature an end-to-end, cloud-native framework that allows enterprises to build, customise and deploy generative AI models virtually anywhere. It will also enable enterprises to pull in their own data to build and run custom generative AI models on VMware's hybrid cloud infrastructure.

Essential Business Intelligence, Continuous LIVE TV, Sharp Market Insights, Practical Personal Finance Advice and Latest Stories — On NDTV Profit.
