Made By Google Event Highlights AI Software Advances
Summary:
- At the Made by Google event, Google managed to establish differentiation in the smartphone market by focusing on the AI-powered software capabilities of its latest Pixel 9 line of phones.
- The deep integration of its Gemini AI models, in particular, gave Google a way to demonstrate a variety of unique and compelling experiences that are now available across the range of new Pixel 9 phones.
- The company managed to bring some differentiation on the hardware side as well, though it was clearly in the software that Google made its most compelling arguments for switching to a new Pixel 9.
Creating differentiation in a market like smartphones, which is filled with countless me-too products, is not an easy thing to do. To its credit, Google (NASDAQ:GOOG, NASDAQ:GOOGL) managed to pull off that challenging task at the Made by Google event by focusing on the AI-powered software capabilities found in its latest Pixel 9 line of phones.
The deep integration of its Gemini AI models, in particular, gave the company a way to demonstrate a variety of unique – and compelling – experiences that are now available on the full range of new Pixel 9 phones. From the surprisingly fluid conversational AI assistant of Gemini Live to the call transcription and summarization of Call Notes and the image creation capabilities of Pixel Studio, Google highlighted several practical examples of AI-powered features that regular people are actually going to want to use.
On the hardware side of things, Google managed to bring some differentiation as well – at least for its latest foldable. The 9, 9 Pro and 9 Pro XL all feature the traditional flat slab-of-glass smartphone shape (in 6.3” and 6.8” screen sizes) but sport a new rear camera design that provides a modest bit of visual change. On the updated 9 Pro Fold, however, the wider, thinner shape of the phone offers a clearly unique design versus foldable competitors like Samsung’s (OTCPK:SSNLF) Galaxy Z Fold. (Interestingly, the Pixel 9 Pro Fold’s design is actually thinner and taller than the original Pixel Fold.)
The aspect ratio that Google has chosen for the outside screen of the 9 Pro Fold makes it look and feel nearly identical to a regular Pixel 9, while still giving you the advantage of a foldable screen that opens up to a massive 8”. As someone who has used foldable phones for several years, I can say that similarity to a traditional phone size in the folded form factor is a much more important change than it may first appear – and something I’m eager to try.
Inside the Pixel 9 line is Google’s own Tensor G4 SoC, the latest version of its mobile processor line. Built in conjunction with Samsung’s chip division, the Tensor G4 features standard Arm CPU and GPU cores but also incorporates Google’s proprietary TPU AI accelerator. The whole line of phones incorporates upgraded camera modules, with the 9 Pro and Pro XL, in particular, sporting new 50MP main, 48MP ultrawide and 42MP selfie sensors. The phones also offer higher amounts of standard memory, with 16 GB the new default on all but the $799 Pixel 9 (which features 12 GB standard). This will be critical because AI models like Gemini Nano require extra memory to run at their best.
In truth, though, it wasn’t in the hardware or the tech specs that Google made its most compelling arguments for switching to a new Pixel 9 – it was clearly in the software. In fact, the Gemini part of the story is so important that on Google’s own site the official product names of the phones are listed as Pixel 9 (or 9 Pro, etc.) with Gemini. As Google announced at its I/O event this spring, Gemini is the replacement for Google Assistant; it’s available for the Pixel 9 line now and will be coming to other Android phones (including those from Samsung, Motorola and others) later this month. While Google is somewhat notorious for swapping names and changing critical functionality on its devices (or for its services) on a regular basis, it seems the company has made this shift from Google Assistant to Gemini in a more comprehensive and thoughtful manner.
First, Gemini shows up in several different but related ways. For the kinds of “smart assistant” features that integrate knowledge of an individual’s preferences, contacts, calendar, messages, etc., the “regular” version of Gemini provides GenAI-powered capabilities that leverage the Gemini Nano model on device. In addition, Gemini Nano powers the new Pixel Weather app and the new Pixel Screenshots app, which can be used to manually track and recall activities and information that appear on your phone’s screen. (In a way, Screenshots is like a manual version of Microsoft’s Recall function for Windows, although it doesn’t automatically take screenshots the way Recall does.)
Gemini Live provides a voice-based assistant that you can have complete conversations with. In its first iteration, it runs entirely in the cloud and requires a subscription to Gemini Advanced (the first year of which is included with all but the basic Pixel 9). In multiple demos of Gemini Live, I was very impressed by how quickly and intelligently the assistant (which can use one of 10 different voices) can respond – it’s by far the closest thing to talking with an AI-powered digital persona I’ve ever seen (except in the movies, of course). Unfortunately, because Gemini Live doesn’t run on device just yet, it can’t access the kinds of personalized information that the “regular” Gemini models running on devices can, but combining these two kinds of assistant experiences is clearly where Google is headed. Even so, the kinds of things you can use Gemini Live for – brainstorming sessions, researching a topic, prepping for an interview and much more – look to be very useful on their own.
Yet another interesting implementation of Gemini Live came through Google’s new Pixel Buds Pro 2, which were also introduced at the event. By simply tapping on one earbud and saying “Let’s talk live,” you can initiate a conversation with the AI assistant – one that can go on for an hour or more if you so choose. What’s intriguing about using earbuds instead of the phone is that it will likely trigger different kinds of conversations, because it feels more natural to engage in an audio-only conversation through earbuds than it does to hold a phone and talk at its screen.
In addition to the Gemini-powered features, Google also debuted other AI-enabled software that’s unique to the Pixel, including an enhanced Magic Editor for photos that extends Google’s already impressive image editing capabilities to yet another reality-bending level. It really is getting harder to tell what’s real and what’s not when it comes to phone-based photography. For image generation, Google showed Pixel Studio, which leverages both an on-device diffusion model and a cloud-based version of the company’s Imagen 3 GenAI foundation model. Finally, the clever new Add Me feature lets you get a group shot by merging two different photos – one that you take of your subject(s) and one that they take of you in the same spot – using augmented reality to create a shot of everyone there, without having to ask a stranger to take it for you!
Ultimately, what was most impressive about Google’s launch event was that it managed to provide practical, real-world examples of AI-powered experiences that are likely to appeal to a broad range of people. At a time when some people have started to voice concerns about an AI hype cycle and a lack of capabilities that really matter, Google hammered home that the impact of GenAI is not only real, but compellingly so. Plus, the company dropped hints of even more interesting capabilities yet to come. I have little doubt that some speed bumps will accompany these launches – as they always seem to do with AI-related capabilities – but I’m now even more convinced that we’re just at the start of what promises to be a very exciting new era.
Disclaimer: Some of the author’s clients are vendors in the tech industry.
Disclosure: None.
Editor’s Note: The summary bullets for this article were chosen by Seeking Alpha editors.