LOS ANGELES — Tech news site Protocol reported last month that California-based Zoom was looking into building emotion-detection tools, which could use artificial intelligence to scan facial movements and speech to draw conclusions about people's moods.
“If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices,” said Caitlin Seeley George, director of campaign and operations at Fight for the Future, a digital rights group.

The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls, and according to Protocol it also plans to explore more advanced emotion-reading tools across its products.

Describing the sentiment analysis technology, Zoom said its tools can measure the “emotional tone of the conversations” to help salespeople improve their pitches.
“This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,” said the letter, a copy of which was sent to the Thomson Reuters Foundation.

From classrooms to job interviews and in public places, emotion-recognition tools are increasingly common, despite questions about their accuracy and human rights implications.