Dell And Nvidia Partner To Create Generative AI Solutions For Businesses
Summary:
- Dell Technologies and Nvidia have put together an offering called Project Helix that’s specifically designed to make the process of getting started with generative AI much easier.
- Project Helix is focused on creating full-stack, on-premises generative AI solutions that allow companies to build new generative AI foundation models or customize existing ones using their own data.
- As the work is being done internally, Project Helix can help manage the IP leakage issues that many companies are concerned about.
As recent announcements demonstrate, the world is awash with new offerings targeted at bringing generative AI capabilities to businesses. From IBM (IBM) to Google (GOOG, GOOGL), Salesforce (CRM), Microsoft (MSFT), Amazon (AMZN) and Meta (META), every tech company, it seems, is trying to take advantage of the excitement around this transformational new technology.
And with good reason, because it’s also become increasingly clear that most organizations are eager to embrace it. Businesses are quickly identifying the potential productivity enhancements, efficiencies, and other benefits it can enable. One big problem, however, is that most companies aren’t entirely sure how to start leveraging generative AI. People with deep knowledge of how the technology works and how it can be implemented are still few and far between (not to mention very expensive).
Recognizing this disconnect, Dell Technologies (NYSE:DELL) and Nvidia (NASDAQ:NVDA) have put together an offering called Project Helix that’s specifically designed to make the process of getting started with generative AI much easier. Project Helix is focused on creating full-stack, on-premises generative AI solutions that allow companies to build new generative AI foundation models or customize existing ones using their own data.
One of the problems that quickly popped up in businesses that had started to use generative AI services is the leakage of internal IP. In fact, several companies – including Samsung and, most recently, Apple – have implemented policies that prevent their employees from using things like ChatGPT for work purposes because of fears related to this issue.
Part of the reason for this concern is that virtually all the early instances of generative AI could only run in huge cloud-based datacenters, and many of them collected the data entered into their prompts. In the blindingly fast evolution of the foundation models that underlie generative AI applications, however, a number of these concerns have already been addressed. One of the biggest developments is that there is now a huge range of open-source models available from marketplaces like Hugging Face. Many of these open-source models can run very efficiently with more modest computing requirements, such as those of an appropriately equipped on-prem datacenter. On top of that, some of the big tech companies have started to shift the rules about where their models can be run. They’re also creating smaller versions of their models that are optimized to run on-site.
Additionally, we’ve seen several companies, including Nvidia, start to offer models that are specifically designed for enterprise applications. The Nvidia development is interesting on several levels. First, of course, is the fact that while the company is certainly strongly associated with generative AI, that association has been almost exclusively because of its hardware. The company’s GPU chips power a large majority of the current generative AI applications and services in the cloud. At the company’s last GTC conference in March, however, it surprised nearly everyone by unveiling an entire range of generative AI-related software. In particular, it unveiled industry-specific foundation models and enterprise-focused development tools, including its NeMo large language model (LLM) framework and NeMo Guardrails for filtering out unwanted topics. One thing that wasn’t surprising is that these models were optimized to run on Nvidia hardware.
With Project Helix, what Dell Technologies and Nvidia have done is put together a range of Dell PowerEdge server systems that include Nvidia H100 GPUs and Nvidia’s line of BlueField DPUs (data processing units, used for the high-speed interconnects between servers that AI workloads demand) and bundled them with Nvidia’s AI Enterprise software. In addition, Dell offers several different storage options from its PowerScale and ECS Enterprise Object Storage lines that are optimized for these types of AI workloads. The result is a full “solution” that lets companies get started with building or customizing generative AI models. Potential customers can either use one of the Nvidia foundation model options or, if they prefer, select an open-source model from Hugging Face (or a solution from another tech provider) and start the process.
The bundled Nvidia software includes the ability to import an organization’s existing corpus of data, ranging from documents and customer service chats to social media posts and much more, and then use that data to either train a brand-new model or customize an existing one. Once the training process is complete, the tools necessary to run inference and create new applications that leverage the newly trained model are included as well. Part of the bundle from Dell also includes a blueprint for helping companies walk through the process of creating/customizing these models and building these applications, as well as a range of technical support services.
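In rough outline, that workflow – ingest an internal corpus, train or customize a model on it, then run inference against the result – can be sketched in a few lines of Python. The snippet below is a deliberately toy stand-in that uses a simple bigram word model; it is not the actual NeMo or Dell tooling (whose APIs aren’t detailed here), and the `train` and `generate` functions are illustrative assumptions, not real product calls.

```python
# Toy illustration of the "ingest corpus -> train/customize -> infer"
# workflow described above. A bigram frequency model stands in for a
# real foundation model; everything here is standard-library Python.
from collections import defaultdict, Counter

def train(corpus_docs):
    """'Train' on an organization's documents by counting word bigrams."""
    model = defaultdict(Counter)
    for doc in corpus_docs:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def generate(model, seed, max_words=8):
    """Run 'inference': greedily continue a phrase from a seed word."""
    words = [seed]
    for _ in range(max_words - 1):
        followers = model.get(words[-1])
        if not followers:
            break  # no continuation learned for this word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# A (tiny) stand-in for internal data: support chats, docs, etc.
corpus = [
    "reset your password from the account settings page",
    "contact support to reset your password quickly",
]
model = train(corpus)
print(generate(model, "reset"))
# → reset your password from the account settings page
```

The point of the sketch is the shape of the pipeline, not the model: because both the corpus and the trained model stay in local memory (or, in the real offering, in an on-prem datacenter), no internal data ever leaves the organization.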
Best of all, because this work is being done internally, Project Helix can help manage the IP leakage issues that many companies – even those that have already started working with generative AI tools – are concerned about.
Another important benefit of Project Helix is that it lets companies start to leverage generative AI in a more differentiated and personalized way. While the general-purpose tools currently available can certainly help for certain types of applications and environments, most companies recognize that the real competitive benefit of generative AI lies in customization. As a result, there’s a great deal of interest in incorporating a company’s own data into these tools, but again, there’s a lot of confusion about how exactly to do that.
Putting together an “easy kit” for generative AI doesn’t mean organizations won’t face challenges in leveraging their data and the technology to create the solutions they need. Let’s not forget that the concepts behind generative AI are still very new, and it’s an extremely complex technology. Nevertheless, by bundling the necessary hardware and software that’s been pretested to work together, along with guidance on how to work through the process, Project Helix looks to be an attractive option for organizations that are eager – or feel competitively compelled – to dive into this exciting new world.
Disclaimer: Some of the author’s clients are vendors in the tech industry.
Disclosure: None.
Source: Author
Editor’s Note: The summary bullets for this article were chosen by Seeking Alpha editors.