Amazon workers are listening to what you say to Alexa

Bloomberg

Tens of millions of people use smart speakers and their voice software to play games, find music or trawl for trivia. Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.
Sometimes, someone is.
Amazon.com Inc employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices.
The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.
The Alexa voice review process, described by seven people who have worked on the programme, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.
The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the programme. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district. The modern facility stands out amid the crumbling infrastructure and bears no exterior sign advertising Amazon’s presence.
The work is mostly mundane. One worker in Boston said he mined accumulated voice data for specific utterances such as “Taylor Swift” and annotated them to indicate the searcher meant the musical artist. Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.
Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was an assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.
“We take the security and privacy of our customers’ personal information seriously,” an Amazon spokesman said in an emailed statement.
“We only annotate an extremely small sample of Alexa voice recordings in order [to] improve the customer experience. For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.
“We have strict technical and operational safeguards, and have a zero tolerance policy for the abuse of our system. Employees do not have direct access to information that can identify the person or account as part of this workflow. All information is treated with high confidentiality and we use multi-factor authentication to restrict access, service encryption and audits of our control environment to protect it.”
Amazon, in its marketing and privacy policy materials, doesn’t explicitly say humans are listening to recordings of some conversations picked up by Alexa. “We use your requests to Alexa to train our speech recognition and natural language understanding systems,” the company says in a list of frequently asked questions.
In Alexa’s privacy settings, the company gives users the option of disabling the use of their voice recordings for the development of new features. A screenshot reviewed by Bloomberg shows that the recordings sent to the Alexa auditors don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number.
The Intercept reported earlier this year that employees of Amazon-owned Ring manually identify vehicles and people in videos captured by the company’s doorbell cameras, an effort to better train the software to do that work itself.
“You don’t necessarily think of another human listening to what you’re telling your smart speaker in the intimacy of your home,” said Florian Schaub, a professor at the University of Michigan who has researched privacy issues related to smart speakers.
“I think we’ve been conditioned to the [assumption] that these machines are just doing magic machine learning. But the fact is there is still manual processing involved.”
“Whether that’s a privacy concern or not depends on how cautious Amazon and other companies are in what type of information they have manually annotated, and how they present that information to someone,” he added.
When the Echo debuted in 2014, Amazon’s cylindrical smart speaker quickly popularised the use of voice software in the home.
Before long, Alphabet Inc launched its own version, called Google Home, followed by Apple Inc’s HomePod. Various companies also sell their own devices in China. Globally, consumers bought 78 million smart speakers last year, according to researcher Canalys. Millions more use voice software to interact with digital assistants on their smartphones.
Alexa software is designed to continuously record snatches of audio, listening for a wake word. That’s “Alexa” by default, but people can change it to “Echo” or “computer.” When the wake word is detected, the light ring at the top of the Echo turns blue, indicating the device is recording and beaming a command to Amazon servers.
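The on-device behaviour described above — continuously buffering short snatches of audio locally and only streaming to the cloud once a wake word is spotted — can be illustrated with a minimal sketch. This is a hypothetical toy, not Amazon's actual code; the function names (`run_loop`, `detect`, `stream_to_cloud`) and buffer size are invented for illustration.

```python
# Hypothetical sketch of a wake-word loop (not Amazon's implementation).
# The device keeps only a short rolling buffer of recent audio frames;
# nothing leaves the device until the detector flags a wake word.
from collections import deque

WAKE_WORDS = {"alexa", "echo", "computer"}  # user-selectable, per the article
BUFFER_FRAMES = 50  # size of the rolling window of recent frames

def run_loop(frames, detect, stream_to_cloud):
    """frames: iterable of audio frames.
    detect: maps the current buffer to a recognised word, or None.
    stream_to_cloud: called with buffered audio when a wake word fires."""
    buffer = deque(maxlen=BUFFER_FRAMES)
    for frame in frames:
        buffer.append(frame)
        word = detect(buffer)
        if word in WAKE_WORDS:
            # Wake word heard: light turns blue, audio is sent upstream.
            stream_to_cloud(list(buffer))
            buffer.clear()
```

The key design point the article alludes to is that the always-on microphone does not imply always-on uploading: the rolling buffer is discarded continuously, and only a detection event triggers transmission.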
Most modern speech-recognition systems rely on neural networks patterned on the human brain. The software learns as it goes, by spotting patterns amid vast amounts of data. The algorithms powering the Echo and other smart speakers use models of probability to make educated guesses. If someone asks Alexa if there’s a Greek place nearby, the algorithms know the user is probably looking for a restaurant, not a church or community centre.
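The “educated guess” in the Greek-place example amounts to ranking candidate interpretations by estimated probability and picking the most likely one. A toy illustration, with made-up numbers and no claim to reflect Amazon's actual models:

```python
# Toy illustration of probability-based disambiguation (invented numbers,
# not Amazon's models): choose the interpretation with the highest score.
def best_intent(scores):
    """scores: mapping of candidate interpretation -> estimated probability."""
    return max(scores, key=scores.get)

# "Is there a Greek place nearby?" - restaurant is the most probable reading.
print(best_intent({"restaurant": 0.8, "church": 0.15, "community centre": 0.05}))
# → restaurant
```

The human annotators described in this article supply the labelled examples from which such probability estimates are learned in the first place.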
Amazon’s review process for speech data begins when Alexa pulls a random, small sampling of customer voice recordings and sends the audio files to the far-flung employees and contractors, according to a person familiar with the programme’s design.