Amid questions from some investors, including Michael Burry, about who checks OpenAI's (OPENAI) books, the company's U.S. auditor is one of the Big Four firms, the Financial Times reported.
The ChatGPT maker, which has come under scrutiny in recent weeks for its intense spending plans, is audited by Deloitte, the news outlet added, citing multiple sources familiar with the matter.
OpenAI and Deloitte did not immediately respond to a request for comment from Seeking Alpha.
On Wednesday, Burry, who has taken to the social media platform X to express his concerns about the artificial intelligence spending boom, openly pondered which firm was auditing OpenAI’s books. “OpenAI is the linchpin here,” Burry wrote. “Can anyone name their auditor?”
Burry has also argued that the useful lives assigned to processors and data centers during the AI spending boom are being inflated to help company earnings.
“The idea of a useful life for depreciation being longer because chips from more than 3-4 years ago are fully booked confuses physical utilization with value creation,” Burry wrote in another post on X. “Just because something is used does not mean it is profitable. GAAP refers to economic benefits. Airlines keep old planes around for overflow during Thanksgiving or Christmas, but are only marginally profitable on the planes all the same, and not worth much at all. A100s take 2-3x more power per FLOP (compute unit) so cost 2-3x more in electricity alone than H100s. And Nvidia claims H100 is 25x less energy efficient than Blackwell for inference. If that is the direction you are going, chances are you have to be doing it, and it is not pleasant.”
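For readers unfamiliar with the accounting mechanics Burry is pointing at, the sketch below is a minimal, hypothetical illustration. The fleet cost and useful-life figures are invented for this example, not drawn from the article or any company's filings; it simply shows how stretching the assumed useful life of a GPU fleet shrinks annual straight-line depreciation and therefore lifts reported earnings.

```python
# Purely illustrative: hypothetical figures, not numbers from the article or any filing.

def annual_straight_line_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation with zero salvage value: cost spread evenly over the assumed life."""
    return cost / useful_life_years

fleet_cost = 10_000_000_000  # assume a $10B GPU fleet, for illustration only

for life_years in (4, 6):  # shorter vs. stretched useful-life assumption
    expense = annual_straight_line_depreciation(fleet_cost, life_years)
    print(f"{life_years}-year useful life: ${expense:,.0f} depreciation expense per year")

# Stretching the assumed life from 4 to 6 years cuts the annual charge from
# $2.5B to roughly $1.67B; that difference flows straight into higher
# reported operating income, which is the effect Burry says is flattering earnings.
```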
Nvidia (NVDA), which recently announced a deal to invest up to $100B in OpenAI, fired back at the claim that its chips from a few years ago are being kept in service mainly to flatter customers’ accounting.
“The long useful life of NVIDIA’s CUDA GPUs is a significant [total cost of ownership] advantage over accelerators,” Nvidia CFO Colette Kress said on the company’s earnings call.
“Most accelerators without CUDA and NVIDIA’s time-tested and versatile architecture became obsolete within a few years as model technologies evolve,” Kress added. “Thanks to CUDA, the A100 GPUs we shipped 6 years ago are still running at full utilization today, powered by [a] vastly improved software stack.”