The Pixel 4 is finally live, not that there were many surprises left. Of course, reading about leaked features and actually experiencing them are two very different things. Some of those features depend heavily on the hardware the Pixel 4 brings to the table, but one in particular, Live Caption, will be spreading to a wider audience, at least to those using last year’s Pixel phones.
Live Caption is one of two new accessibility features that Google showcased at I/O 2019 a few months back. Both involve turning sounds into words for people with hearing difficulties. Both, of course, can also be useful for regular smartphone users who can barely hear a thing over the noise around them or who want to follow audio discreetly without headphones.
That’s where Live Caption comes into play. Unlike Live Transcribe, which transcribes ambient sounds picked up by the phone’s mic, Live Caption works on audio playing on the device itself, particularly in video apps like YouTube. In effect, it is like YouTube’s captioning system, generalized to Android media apps.
In addition to offering a new way to experience a Pixel phone, Live Caption also demonstrates Google’s advances in machine learning. In particular, it shows how such a complex feature can run entirely on the device, in real time and without an Internet connection.
Live Caption does require a bit of processing power and a small amount of storage space, but it seems the Pixel 4 won’t be the absolute minimum. Android Police reports that the Pixel 3 and even the Pixel 3a will get the feature by December. Curiously, despite being of almost the same caliber as the Pixel 3a, the Pixel 2 and the original Pixel are being left out.