Amazon is switching on local voice recognition processing, promising users of some of its latest Echo smart speakers and smart displays that they can have their Alexa commands avoid the cloud completely. The new feature, announced at Amazon’s big hardware and services event today, taps into the retail giant’s homegrown AZ1 Neural Edge chipset.
That Neural Edge processor was one of Amazon’s first forays into custom silicon, following Google, Apple, and others in designing its own chips. While the AZ1 couldn’t power the whole Alexa experience, it focused instead on specific voice recognition features.
Now, we’re seeing how that pays off. In the US, select Echo and Echo Show devices will be able to use the Amazon AZ1 to recognize Alexa commands completely on-device. Rather than having to pass the audio to the cloud for processing, it’ll be carried out entirely on the Echo itself.
Initially, in addition to being in the US, you’ll need either an Echo Show 10 – the model with the rotating display that can follow users around the room – or the 4th Gen Echo. Both launched last year. According to Amazon’s Dave Limp, VP of Devices and Services, support will expand to other devices in due course.
Owners will need to opt in for local processing to activate. With that setting enabled, the Echo will also automatically delete the audio recording after it’s been processed.
“Privacy is an opportunity for invention,” Limp argues, and Amazon is investing in more powerful Neural Edge chipsets. A new AZ2, also announced today, will be part of the Echo Show 15.