Alexa and Google Assistant hacks let eavesdropping smart speakers do voice phishing

Smart speakers like Amazon Echo or Google Home can certainly be useful tools, but along with that usefulness also comes a number of security concerns. These concerns have been well stated since these smart speakers first hit the market, and today, a team of security researchers are sounding the alarm on exploits affecting Google Home and Amazon Echo devices that developers can use to either eavesdrop on users or phish for personal information.

The Berlin-based Security Research Labs detailed both exploits in a lengthy report published to its website. It's calling this pair of exploits the "Smart Spies" hacks, and it developed a collection of applications to demonstrate not only how the attacks are carried out, but also the fact that skills or apps carrying these exploits can get around Amazon and Google's approval processes.

The first Smart Spies hack involves phishing for a user's password using bogus update alerts. As Security Research Labs explains it, this exploit relies on the fact that once a skill or app is approved by either Google or Amazon, changing its functionality doesn't trigger a second review. With that in mind, Security Research Labs constructed apps that use the word "start" to trigger functions.

Once the innocuous app was approved by Amazon and Google, Security Research Labs went back in and changed the welcome message for the app to a fake error message – "This skill is currently not available in your country" – to make users believe that the app didn't start and is no longer listening. From there, Security Research Labs made the app "say" the character sequence for "�." Since the sequence is unpronounceable, the speaker goes silent for a period of time, reinforcing the notion that neither the app nor the speaker are currently active.

After a reasonable period of silence has passed, the app then plays a fake update alert in a voice that's similar to the one used by Alexa or Google Assistant. In the videos you see above, this alert makes it seem like there's an update available for the smart speaker and that users must prompt the speaker to install it by saying "Start update" followed by their password. Since "start" is a trigger word in this case, the attacker captures the user's password while the victim has no clue that they just gave login credentials to a malicious third-party.

Many of us know that Amazon and Google aren't going to ask for your password via Alexa or the Assistant, but this hack plays on the ignorance of users who don't know that. Needless to say, if your smart speaker ever asks for personal information like your passwords or credit card numbers, don't give that information up.

The second Smart Spies hack is even more worrying, as it can allow your smart speaker to continue eavesdropping on conversations when you think you've stopped an app. The process of carrying out this attack is a bit different for Echo devices and Google Home devices, but both attacks rely on changing the way an app functions after it's been approved by either Amazon or Google.

In both cases, the attack is carried out by leading the user to believe that they've stopped an app while in reality it's still running silently. Once again, the character sequence "�," is used, and on Echo devices, it's at this point that the app begins listening for common trigger words such as "I," though these trigger words can be anything defined by the attacker. The app listens for a few seconds after the user gives the "stop" command, and if the user speaks a phrase beginning with that trigger word, the contents of that conversation are sent off to the attacker's servers.

On Google Home, this eavesdropping exploit has the potential to run indefinitely, and not only will it eavesdrop on users continuously, but it will also send any other "OK Google" commands the user tries to carry out to the attackers. This means that the exploit could potentially be used to stage man-in-the-middle attacks when smart speaker owners attempt to use another app.

Security Research Labs reported these exploits to both Google and Amazon before publishing its report, in which it says that both companies need to implement better protection for end-users. Central to SRLabs' criticism is the flawed approval process employed by Amazon and Google, though the researchers also believe that Google and Amazon should take action against apps that use unpronounceable characters or silent SSML messages.

SRLabs believes that Amazon and Google should ban output text that includes "password" as well, since there's no real reason for apps to ask for those in the first place. In the end, though, SRLabs also says that users should approach new smart speaker skills and apps with a degree of caution, as this report makes it clear creative attackers can do a lot while evading Amazon and Google's attention.