Apple (AAPL) has released a new open-source AI model called SHARP which can turn a single 2-dimensional image into a 3-dimensional scene.
The U.S. tech giant has published a paper titled "Sharp Monocular View Synthesis in Less Than a Second," which shows how it trained a model to reconstruct a 3D scene from a single 2D image in under a second on a standard graphics processing unit (GPU).
“We present SHARP, an approach to photorealistic view synthesis from a single image. Given a single photograph, SHARP regresses the parameters of a 3D Gaussian representation of the depicted scene. This is done in less than a second on a standard GPU via a single feedforward pass through a neural network. The 3D Gaussian representation produced by SHARP can then be rendered in real time, yielding high-resolution photorealistic images for nearby views,” said the researchers in the paper.
Gaussian splatting is a technique for representing real-life scenes in 3D as a collection of translucent 3D Gaussians, each with its own position, shape, opacity, and color, which can then be rendered efficiently from new viewpoints.
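To make the idea concrete, here is a minimal sketch of the parameters a single 3D Gaussian splat typically carries, and how its anisotropic covariance is built from a per-axis scale and a rotation. This is an illustrative toy, not Apple's implementation; the class and function names are hypothetical, and real splatting systems usually store spherical-harmonic color coefficients rather than a flat RGB value.

```python
from dataclasses import dataclass

@dataclass
class Gaussian3D:
    # Hypothetical minimal parameter set for one splat.
    mean: tuple      # (x, y, z) center of the Gaussian
    scale: tuple     # per-axis standard deviations
    rotation: tuple  # unit quaternion (w, x, y, z)
    opacity: float   # in [0, 1]
    color: tuple     # RGB in [0, 1]

def quat_to_matrix(q):
    """3x3 rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def covariance(g: Gaussian3D):
    """Sigma = (R S)(R S)^T: the splat's anisotropic 3D covariance."""
    R = quat_to_matrix(g.rotation)
    S = g.scale
    M = [[R[i][j] * S[j] for j in range(3)] for i in range(3)]  # M = R S
    return [[sum(M[i][k] * M[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

With an identity rotation, the covariance reduces to a diagonal of the squared scales, which is why the scale parameters can be read as the splat's extent along each axis.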
The researchers noted that the representation is metric, with absolute scale, so camera movements can be specified in real-world units. They added that experimental results show SHARP delivers zero-shot generalization across datasets.
Apple noted that SHARP synthesizes a photorealistic 3D representation from a single photograph in less than a second. The synthesized representation supports high-resolution rendering of nearby views, with sharp details and fine structures, at more than 100 frames per second on a standard GPU.