Experts have often raised concerns about voice-activated home assistants like Amazon’s Alexa, and just how much insight into your life they have, but now researchers at Checkmarx have proven just how vulnerable such services can make your information—by hacking into one of them.
Alexa “wakes up” when she hears her name, then records the user’s interaction with her. The recording is stopped when the interaction is over, but privacy advocates have pointed out that such a service, if hacked, could prove a real threat to user privacy.
And now researchers have now proven that this is absolutely possible.
When a user talks to Alexa, she usually gives a response that doesn’t require a second statement from the user. However, in some instances clarification is needed, and then Alexa reopens the recording capability to hear the user’s second statement. Researchers were able to stop the system from shutting down the recording process after the interaction with the user was over—by using a relatively simple calculator app to trick Alexa into keeping it open for a secon response, even when none was needed.
Normally, Alexa only “hears” interactions of a certain length, with certain expected words. However, researchers forced Alexa to record sentences of any length and with any words allowed, meaning that once Alexa interacted with a user, instead of shutting down, she would then record everything in the surrounding area. This, of course, poses a huge threat to the security of user information.
Thankfully, Checkmarx notified Amazon of this vulnerability and they have since taken steps to close down this security loophole. Reseachers also noted that Alexa’s blue light, which signifies that she is still listening, could not be stopped, meaning that users could potentially see the blue light and notice that something is wrong.
An Amazon spokesperson clarified that the company was working to make Alexa more secure since this vulnerability was found:
Customer trust is important to us and we take security and privacy seriously. We have put mitigations in place for detecting this type of skill behavior reported by Checkmarx.