Tesla (TSLA) CEO Elon Musk suggested last week at the company’s annual meeting that customers could be paid $100 to $200 a month to allow Tesla to run AI inference workloads on their vehicles when they are not in use.
Morgan Stanley analyst Adam Jonas crunched the numbers on the potential implications of the shared AI workload scenario.
“There are more than 300 million light vehicles on the road in the United States and over 1.2 billion light vehicles in the global car ‘parc.’ If one were to assume 100% of these vehicles had 1 NVIDIA Blackwell GPU equivalent of inference compute (currently around 9,000 TOPS), that’s over 300 million Blackwells in the U.S. alone and 1.2bn globally – a figure we estimate could reach 2bn over the next 15 years,” wrote Jonas.
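For a rough sense of scale, the back-of-envelope sketch below multiplies the fleet sizes and per-vehicle throughput from Jonas’s quote; the fleet counts and the 9,000 TOPS figure come from the quote, while the utilization parameter, function name, and unit conversion are illustrative assumptions, not figures from Morgan Stanley.

```python
# Back-of-envelope scale check using the figures quoted above (illustrative only).
# Fleet sizes and per-vehicle throughput come from the Jonas quote; the
# utilization assumption is hypothetical.

TOPS_PER_VEHICLE = 9_000   # ~1 NVIDIA Blackwell GPU equivalent, per the quote
US_FLEET = 300e6           # light vehicles on U.S. roads
GLOBAL_FLEET = 1.2e9       # global light-vehicle parc today
FUTURE_FLEET = 2e9         # Jonas's ~15-year global estimate

def aggregate_exaops(vehicles: float, utilization: float = 1.0) -> float:
    """Total fleet throughput in exa-operations per second (1 EOPS = 1e18 ops/s)."""
    ops_per_second = vehicles * TOPS_PER_VEHICLE * 1e12 * utilization
    return ops_per_second / 1e18

for label, fleet in [("U.S. today", US_FLEET),
                     ("Global today", GLOBAL_FLEET),
                     ("Global in ~15 years", FUTURE_FLEET)]:
    print(f"{label}: ~{aggregate_exaops(fleet):,.0f} EOPS at 100% participation")
```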
He then pointed to the one billion humanoid robots, along with a few billion eVTOLs, drones, and other robotic form factors (construction bots, agribots, manufacturing/industrial bots, surgical robots, etc.) that could be added to the shared AI inference pool. Jonas thinks it is not a big stretch of the imagination for tens of billions of Blackwell-equivalent inference computers to sit at the edge, complete with cooling/thermal control and ‘in-situ’ data capture. In his view, this swarming, distributed, low-latency intelligence would have plenty of spare capacity to take load off the nearest data center.
On Seeking Alpha, analyst Steven Fiorillo said the scenario could give Tesla (TSLA) the largest AI inference compute capacity in the world, potentially amounting to 100 gigawatts of distributed inference. “If TSLA pulls this off, it would give them an unmatched edge against the competition and allow them to transcend data centers for their workloads,” he added.
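To put Fiorillo’s 100-gigawatt figure next to the fleet sizes cited by Jonas, a minimal sketch below divides that capacity across the global parc; the per-vehicle wattage that falls out is an implication of combining those two sets of numbers, not a figure given by either analyst.

```python
# Implied per-vehicle power if 100 GW of distributed inference were spread
# across the fleet sizes cited above (illustrative; not an analyst figure).

TOTAL_INFERENCE_GW = 100  # Fiorillo's distributed-inference capacity figure

for label, vehicles in [("Global parc today", 1.2e9),
                        ("Global parc in ~15 years", 2e9)]:
    watts_per_vehicle = TOTAL_INFERENCE_GW * 1e9 / vehicles
    print(f"{label}: ~{watts_per_vehicle:.0f} W of inference per vehicle")
```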