Nvidia Enhances GenAI Offerings For Enterprise
Summary:
- Vendors building solutions designed to help businesses realize the benefits that GenAI can offer are recognizing the challenges and increasing complexity of these efforts.
- Nvidia made several GenAI-focused announcements at the SIGGRAPH Expo trade show, including a new partnership with open-source AI model provider Hugging Face, a new tool for creating custom models across multiple environments, and some new enhancements to its suite of enterprise AI offerings.
- While each tackles a different aspect of the solution, collectively, they’re all geared towards making the process of working with GenAI easier.
There’s little doubt that companies are eager to bring the capabilities of generative AI (GenAI) into their organizations, but it’s also becoming increasingly clear that the ability to do so is proving to be more challenging than many realized. In fact, a recent research study on GenAI use in US businesses by TECHnalysis Research (see “A New Beginning: Generative AI Use in the Enterprise”) highlights this dichotomy in a stark way.
Vendors building solutions designed to help businesses realize the impressive benefits that GenAI can offer are recognizing these challenges. The most recent example is Nvidia (NASDAQ:NVDA). The company made several GenAI-focused announcements at the SIGGRAPH Expo trade show, including a new partnership with open-source AI model provider Hugging Face, a new tool for creating custom models across multiple environments, and some new enhancements to its suite of enterprise AI offerings. While each tackles a different aspect of the solution, collectively, they’re all geared towards making the process of working with GenAI easier.
In the case of the Hugging Face partnership, Nvidia is focused on customizing and deploying foundation models in a more straightforward manner. Hugging Face has quickly established itself as the preeminent marketplace for open-source versions of these models, and many companies have started looking for offerings that they believe will be well-suited to their specific needs. Simply finding the right models is only the first step in the journey, however, as companies typically want to customize these GenAI models with their own data before using them within their organizations.
To do that, each company needs to follow a process that allows it to safely and securely move and/or connect data with those models and then enable the appropriate computing infrastructure to further train the model with its data. That’s where the new partnership kicks in. Nvidia is building a direct connection from the models on Hugging Face to a DGX Cloud service powered by Nvidia GPUs where that training can be done. Called Training Cluster as a Service, this fee-based offering – expected to officially launch later this fall – will streamline the process of configuring all the various software components necessary to make the training process work and give organizations an easier way to get the critical model customization efforts done. Each DGX Cloud instance includes eight of the company’s H100 or A100 80GB Tensor Core GPUs to do the training work.
Of course, foundation model development, as well as applications and services that leverage those models, is an ongoing process. In light of that fact, Nvidia also debuted a new tool that’s designed to let AI-focused developers work on those models in various environments. Called Nvidia AI Workbench, the new software application and workspace lets developers build and test GenAI-focused efforts on appropriately equipped Nvidia GPU-powered PCs and workstations. Their work can then be easily transferred and scaled to various public and private cloud environments (including Amazon’s (AMZN) AWS, Microsoft’s (MSFT) Azure, and Google’s (GOOG, GOOGL) GCP), as well as Nvidia’s own DGX Cloud, as their needs demand.
It turns out that without something like AI Workbench, it’s very difficult to do these kinds of migrations because of the need to separately configure each environment for the model. With Nvidia’s new free AI Workbench tool, however, the process becomes simple not only for individual programmers but also for teams of developers working across different geographies or in different parts of the organization. AI Workbench takes care of finding and linking all the various open-source libraries and frameworks necessary to make the transfer seamless across the different environments. Once again, the goal is to make the process of building and deploying these GenAI models easier than it has been in the past.
Forthcoming RTX-equipped systems from companies like Dell (DELL), HP (HPQ), Lenovo (OTCPK:LNVGY), HPE (HPE) and Super Micro (SMCI) are expected to be able to support Nvidia AI Workbench in both Windows and Linux environments.
The final piece of the puzzle is the effort to build on what Nvidia announced last spring at its GTC event. Nvidia AI Enterprise 4.0 incorporates additional enhancements to the company’s NeMo offering, which Nvidia says now make it a “cloud-native framework” for building custom large language models (LLMs). In addition, the company unveiled the Nvidia Triton Management Service for automating the deployment of multiple Triton Inference Servers, and Nvidia Base Command Manager Essentials, which is intended to manage AI computing clusters across multiple environments. The latter two capabilities, in particular, reflect Nvidia’s growing reach into the overall automation and management of AI workloads in different environments. The Triton Management Service is specifically for efficient orchestration of Nvidia-accelerated Kubernetes-based AI inferencing workloads across containers, while the Base Command Manager uplevels things to entire computing clusters in larger hybrid and multi-cloud environments.
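To make the division of labor concrete: a single Triton Inference Server typically runs on Kubernetes as a deployment along the lines of the illustrative manifest below. The container image, the three service ports, and the `--model-repository` flag follow Nvidia’s published Triton documentation; the deployment name, model store path, and replica count are hypothetical placeholders. The Triton Management Service automates creating and scaling many such deployments, while Base Command Manager Essentials operates a level up, on the clusters those deployments run on.

```yaml
# Illustrative sketch of one Triton Inference Server on Kubernetes.
# The Triton Management Service automates deploying/scaling many of these.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: triton-inference-server   # hypothetical name
spec:
  replicas: 1                     # TMS would manage scaling decisions
  selector:
    matchLabels:
      app: triton
  template:
    metadata:
      labels:
        app: triton
    spec:
      containers:
      - name: triton
        image: nvcr.io/nvidia/tritonserver:23.07-py3
        # hypothetical model repository location
        command: ["tritonserver", "--model-repository=s3://my-models/repo"]
        ports:
        - containerPort: 8000     # HTTP inference endpoint
        - containerPort: 8001     # gRPC inference endpoint
        - containerPort: 8002     # Prometheus metrics
        resources:
          limits:
            nvidia.com/gpu: 1     # one Nvidia GPU per replica
```

Managing a handful of these manifests by hand is feasible; doing so across dozens of models and clusters is where the new automated orchestration tools are aimed.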
What’s clear from all these announcements is that Nvidia – like other vendors – is recognizing the increasing complexity of the efforts required to make GenAI an important part of any business’ IT efforts. These new tools look to be an important step towards simplifying at least some of the processes involved, but as the GenAI study referenced at the beginning makes clear, it’s going to be a while before many organizations are comfortable doing this kind of work on their own.
This whole field is unexplored territory for most organizations, and there needs to be a lot more education to bring all the interested parties up to speed. At the same time, there’s a very real sense that organizations need to jump into these GenAI initiatives quickly, lest they fall behind their competition. The resulting disconnect is going to be difficult to navigate for a while, but efforts like what Nvidia just unveiled are clearly steps in the right direction.
Disclaimer: Some of the author’s clients are vendors in the tech industry.
Disclosure: None.
Source: Author
Editor’s Note: The summary bullets for this article were chosen by Seeking Alpha editors.