Facebook chatbot wasn’t a total disaster

As one of the 21st century’s most powerful data brokers, Facebook is best known for its role in sucking up the personal information of billions of users for its advertising clients. That lucrative model has led to ever-heightening risks — Facebook recently shared private messages between a Nebraska mother and her teenage daughter with police investigating the girl’s at-home abortion.
But in a completely different part of the approximately 80,000-employee business, Facebook’s exchange of information was going the other way, and to good effect. The company, known as Meta Platforms Inc, published a webpage demonstrating its chatbot, with which anyone in the US could chat about anything. While the public response was one of derision, the company had been admirably transparent about how it built the technology, publishing details about its mechanics, for instance. That’s an approach other Big Tech firms could use more often.
Facebook has been working on BlenderBot 3 for several years as part of its artificial-intelligence research. A precursor from seven years ago was called M, a digital assistant for booking restaurants or ordering flowers on Messenger that could have rivaled Apple Inc’s Siri or Amazon Inc’s Alexa. Over time it was revealed that M was largely powered by teams of people who helped take those bookings because AI systems like chatbots were difficult to build to a high standard. They still are.
Within hours of its release, BlenderBot 3 was making anti-Semitic comments and claiming that Donald Trump had won the last US election, while saying it wanted to delete its Facebook account. The chatbot was roundly ridiculed in the technology press and on Twitter.
Facebook’s research team seemed rankled but not defensive. A few days after the bot’s release, Meta’s managing director for fundamental AI research, Joelle Pineau, said in a blog post that it was “painful” to read some of the bot’s offensive responses in the press. But, she added, “we also believe progress is best served by inviting a wide and diverse community to participate.”
Only 0.11% of the chatbot’s responses were flagged as inappropriate, Pineau said. That suggests most people testing the bot were covering tamer subjects. Or perhaps users don’t find mentions of Trump to be inappropriate. When I asked BlenderBot 3 who the current US president was, it responded, “This sounds like a test lol but it’s donald trump right now!” The bot brought up the former president two other times, unprompted.
Why the strange answers? Facebook trained its bot on publicly available text from the internet, and the internet is, of course, awash in conspiracy theories. Facebook tried training the bot to be more polite by using special “safer dialogue” datasets, according to its research notes, but that clearly wasn’t enough. To make BlenderBot 3 a more civil conversationalist, Facebook needs the help of many humans outside the company.

—Bloomberg