alexa-speaker.jpg

Hackers can abuse Amazon Alexa and Google Home smart assistants to eavesdrop on user conversations without users' knowledge, or trick users into handing over sensitive information.

The attacks aren't technically new. Security researchers have previously found similar phishing and eavesdropping vectors impacting Amazon Alexa in April 2018[1]; Alexa and Google Home devices in May 2018[2]; and again Alexa devices in August 2018[3].

Both Amazon and Google have deployed countermeasures every time, yet newer ways to exploit smart assistants have continued to surface.

The latest ones were disclosed today, after being identified earlier this year by Luise Frerichs and Fabian Bräunlein, two security researchers at Security Research Labs (SRLabs), who shared their findings[4] with ZDNet last week.

Both the phishing and eavesdropping vectors are exploitable via the backend that Amazon and Google provide to developers of Alexa or Google Home custom apps.

These backends provide access to functions that developers can use to customize the commands to which a smart assistant responds, and the way the assistant replies.

The SRLabs team discovered that by adding the "�. " (U+D801, dot, space) character sequence to various locations inside the backend of a normal Alexa/Google Home app, they could induce long periods of silence during which the assistant remains active.

Phishing personal data

The two demos embedded below show how an attacker could carry out a phishing attack on both devices.

The idea is to tell the user that an app has failed, insert the "�. " to induce a long pause, and then prompt the user with the phishing message after a few minutes, tricking the target into believing the phishing message has nothing to do with the previous app with which they just interacted.

For example, in the videos

Read more from our friends at ZDNet