Amazon plans to join the custom chip club, with future Alexa devices like the Echo smart speaker reportedly set to be powered by a homegrown AI processor. Efforts to design a special chipset that would give Alexa offline talents, among other improvements, are said to have been underway for the past few years. As well as further differentiating its assistant technology, it could also open Amazon up to a potentially lucrative new market.
Currently, the Amazon Echo range is powered by off-the-shelf chips. The Echo Show, for example, uses an Intel Atom processor, while the Echo Dot relies on a TI chipset. The actual heavy lifting for voice recognition, however, is handled remotely.
Say the “Alexa” trigger word, and while Echo speakers are listening out for that locally, anything you say subsequently is passed to Amazon’s servers. They’re responsible for figuring out what you’re actually requesting, with Alexa’s correct reaction then pushed back to the Echo in your home. It allows Amazon to continuously upgrade and improve what its assistant technology can do and how well it performs, but it also comes with some downsides.
Server-based processing introduces inevitable lag, given the audio has to be transferred to the cloud, crunched there, and then Alexa’s response pushed back to the speaker. Deprived of a WiFi connection, meanwhile, the Echo is effectively useless. Even basic tasks, like telling the time, are impossible unless the smart speaker is connected.
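The flow described above can be sketched in a few lines of Python. This is purely an illustration of the architecture as reported, not Amazon's actual implementation; every function name here is hypothetical:

```python
def detect_wake_word(transcript):
    # On-device: only the "Alexa" trigger word is recognized locally.
    return transcript.lower().startswith("alexa")

def cloud_process(utterance):
    # Stand-in for Amazon's server-side speech understanding.
    return "(cloud answer for: %s)" % utterance

def handle(transcript, online):
    """Sketch of the Echo's current cloud-dependent request flow."""
    if not detect_wake_word(transcript):
        return None                      # speaker keeps ignoring the audio
    utterance = transcript.partition(" ")[2]
    if not online:
        # Today, losing WiFi makes even "what time is it" fail.
        return "Sorry, I'm having trouble connecting."
    return cloud_process(utterance)      # adds a network round trip
```

Note that every request after the wake word, however trivial, takes the `cloud_process` path, which is exactly where the lag and the offline failure come from.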
Amazon’s plan, according to The Information, is to bypass that limitation with some custom silicon of its own. The retail behemoth has reportedly been working on an in-house artificial intelligence chipset for some years now, building on its acquisition of chip design specialist Annapurna Labs in 2015. Although exactly how Amazon intends to use the chips is unclear at this stage, there are a few clear advantages.
For a start, Alexa could gain more local functionality, reducing her reliance on the cloud. Asking an Echo more straightforward questions, like the current time or a simple unit conversion, could be recognized and answered onboard the speaker itself, without resorting to server-side processing. That would make Alexa’s responses snappier.
Better on-device AI could also improve Alexa’s ability to spot mentions of her trigger words, and conversely get better at avoiding false activations. It might even give Amazon a new privacy angle to play, since one of the reasons many would-be smart speaker buyers remain skeptical is the amount of data being passed to a company’s cloud servers. Certainly, a locally-hosted Alexa alone would only be able to do a fraction of what the assistant can achieve now, but it would at least mean that essentials like controlling connected lights and such would still work even if the home’s internet connection went down.
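A local-first Alexa along these lines would essentially route requests: simple intents handled on-device, everything else sent to the cloud when a connection is available. A rough sketch, with hypothetical intent names and handlers:

```python
import datetime

# Hypothetical on-device intents: a few essentials that don't need the cloud.
LOCAL_INTENTS = {
    "what time is it": lambda: datetime.datetime.now().strftime("It's %H:%M."),
    "turn off the lights": lambda: "Okay, lights off.",  # local smart-home control
}

def cloud_answer(utterance):
    # Stand-in for server-side processing of more complex requests.
    return "(cloud answer for: %s)" % utterance

def route(utterance, online):
    handler = LOCAL_INTENTS.get(utterance)
    if handler:
        return handler()                 # answered on-device, no round trip
    if online:
        return cloud_answer(utterance)   # complex queries still need the cloud
    return "I can't reach the internet right now."
```

Under this scheme the full assistant still lives in the cloud, but essentials like lighting control keep working when the home's connection drops.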
If true, it puts Amazon in a growing club of companies looking to custom chipset design. Apple has long been creating the chips it uses for iPhones and iPads, and more recently has begun developing its own co-processors for macOS machines like the iMac Pro. Google, meanwhile, has its own processors for things like computational photography: recently, the company enabled the Pixel 2’s Pixel Visual Core for third-party apps wanting to tap into improved photo talents.
Given Amazon’s goals of driving down Echo pricing, meanwhile, figuring out the cheapest possible chip tailor-made to run Alexa successfully could also help considerably with economies of scale. After all, even if it only saved a couple of cents on each Echo unit, that would add up considerably across millions of devices. It might even open up a new sales channel for third-party manufacturers wanting to easily integrate Alexa support, whether that was in smart speakers of their own or other appliances and devices, like kitchen tech or even car dashboards.