
Shares of CoreWeave (NASDAQ:CRWV) rose about 3% on Wednesday after the company, in collaboration with Nvidia (NASDAQ:NVDA) and IBM (NYSE:IBM), delivered the largest-ever MLPerf Training v5.0 submission results for Nvidia’s GB200 Grace Blackwell chips.
CoreWeave said 2,496 Blackwell graphics processing units, or GPUs, were running on CoreWeave’s AI-optimized cloud platform.
The submission is the largest GB200 NVL72 cluster ever benchmarked under MLPerf — 34 times larger than the only other submission from a cloud provider — highlighting the scale of CoreWeave’s cloud platform for demanding AI workloads, according to CoreWeave.
CoreWeave noted that the submission achieved a breakthrough result on the largest and most complex foundational model in the benchmarking suite, Llama 3.1 405B, completing the run in 27.3 minutes.
CoreWeave’s GB200 cluster achieved more than twice the training performance of submissions from other participants at similar cluster sizes, according to the company.
“These MLPerf results reinforce our leadership in supporting today’s most demanding AI workloads,” said Peter Salanki, co-founder and chief technology officer at CoreWeave.
These results matter because they translate directly to faster model development cycles and an optimized total cost of ownership, according to CoreWeave.
For CoreWeave’s customers, this could mean cutting training time in half, scaling workloads efficiently, and training or deploying models more cost-effectively by adopting the latest cloud technologies months before their competitors, CoreWeave noted.
In April, CoreWeave — which went public in March — brought thousands of Grace Blackwell GPUs online, becoming the first cloud provider to make the new GPUs generally available at scale.