There’s no denying that Apple was terribly late to the smart speaker market. Despite marketing it less as a smart speaker and more as a premium audio accessory with smarts, the HomePod has been and always will be compared to the likes of the Amazon Echo and the Google Home. And while it might seem that Apple has put HomePod development on the back burner, a recently surfaced patent could give future generations of the smart speaker an edge over the competition when it comes to hand gestures, face recognition, and even simple visual feedback.
By their very nature, these smart speakers are mostly driven by voice. You ask them questions or tell them to do things, and they answer back in kind. Some have added touch screens for extra input, but those work pretty much the same way we interact with tablets.
A recently published Apple patent suggests that the company is looking into other input methods that involve neither voice nor touch. Instead, you could wave your hands, clap, or make other gestures to tell the HomePod to do something. MacRumors theorizes that such an input system would take advantage of the technology Apple acquired from PrimeSense, the company behind the Microsoft Kinect and, later, Apple’s TrueDepth sensor.
That technology could also be used for facial recognition. And, no, it won’t read your facial expressions to figure out what to do. Instead, the HomePod could use Face ID and distance measurements to determine who is speaking to it and from where. This could be useful for multi-user setups, presuming Apple implements those as well.
It doesn’t seem like Apple is too eager to build a speaker with a display, like the Amazon Echo Show or Google Home Hub. Instead, the patent points to LEDs woven into the fabric to give retro-style, dot-matrix visual feedback. Of course, as with any patent, there’s no assurance this will become an actual product, but one can hope it will.