“I feel haunted.”
This is one of several social media reactions to Amazon.com Inc.'s Alexa digital assistant imitating a grandmother reciting a passage from “The Wonderful Wizard of Oz.”
During a company presentation on Wednesday, Alexa chief scientist Rohit Prasad attempted to demonstrate the digital assistant’s humanlike demeanor, Bloomberg reported.
Prasad stated that he was surprised by the companionable relationship users developed with Alexa and wanted to investigate this further. Human characteristics such as “empathy and affect” are essential for establishing trust with others, he said.
During the ongoing pandemic, in which so many of us have lost someone we care about, AI cannot take away the pain of loss, but it can make those memories last, he added.
Amazon is positioning the service as a tool for digitally reviving the dead, according to the presentation. But in a subsequent interview on the sidelines of Amazon’s re:MARS technology conference in Las Vegas, Prasad emphasized that the service was not primarily intended to replicate the voices of deceased people.
“It’s not about folks who aren’t with you anymore,” he clarified. “But this is about your grandmother; if you want your child to hear grandma’s voice, you may do so even if she is not present. That’s something I’d want.”
As the presentation circulated around the internet, the creep factor dominated the conversation. More significant worries, however, have emerged. One concern is that the technology could be used to produce deepfakes — using a genuine recording to make it sound as though someone said something they never actually said.
Concerns were raised by Siwei Lyu, a computer science and engineering professor at the University at Buffalo whose research specializes in deepfakes and digital media forensics.
“There are undeniable benefits to Amazon’s voice conversion technology, but we should be wary of potential misuses,” he says. “For example, a predator could pose over the phone as a family member or a friend to lure unwary victims, and a faked audio clip of a high-level CEO commenting on her company’s financial situation could send the stock market into chaos.”
While Amazon has not given a release date for the new Alexa capability, comparable technologies could make such mischief much easier in the future. According to Prasad, Amazon has learned to replicate a voice from less than a minute of a person’s recorded speech — a task that previously required hours in a studio.