
What do Facebook co-founder Mark Zuckerberg and Tesla Chief Executive Elon Musk have in common? Both are grappling with big problems that stem, at least in part, from putting faith in artificial intelligence (AI) systems that have underdelivered. Zuckerberg is dealing with algorithms that are failing to stop the spread of harmful content; Musk with software that has yet to drive a car in the ways he has frequently promised.
There is one lesson to be gleaned from their experiences: AI is not yet ready for prime time. Furthermore, it is hard to know when it will be. Companies should consider focusing on cultivating high-quality data — lots of it — and hiring people to do the work that AI is not ready to do. Designed to loosely emulate the human brain, deep-learning AI systems can spot tumors, drive cars and write text, showing spectacular results in a lab setting. But therein lies the catch. When it comes to using the technology in the unpredictable real world, AI sometimes falls short. That’s worrying when it is touted for use in high-stakes applications like healthcare.
The stakes are also dangerously high for social media, where content can influence elections and fuel mental-health disorders. But Facebook’s faith in AI is clear on its own site, where it often highlights machine-learning algorithms before mentioning its army of content moderators. Zuckerberg also told Congress in 2018 that AI tools would be “the scalable way” to identify harmful content. Those tools do a good job at spotting terrorist-related content, but they still struggle to stop misinformation from propagating.
The problem is that human language is constantly changing. Anti-vaccine campaigners use tricks like typing “va((ine” to avoid detection, while private gun-sellers post pictures of empty cases on Facebook Marketplace with a description to “PM me.” These fool the systems designed to stop rule-breaking content, and to make matters worse, the AI often recommends that content too.
Little wonder that the roughly 15,000 content moderators hired to support Facebook’s algorithms are overworked. Last year a New York University Stern School of Business study recommended that Facebook double those workers to 30,000 to monitor posts properly if AI isn’t up to the task.
Cathy O’Neil, author of Weapons of Math Destruction, has said point blank that Facebook’s AI “doesn’t work.” Zuckerberg, for his part, has told lawmakers that it’s difficult for AI to moderate posts because of the nuances of speech.
Musk’s overpromising on AI is practically legendary. In 2019 he told Tesla investors that he “felt very confident” there would be one million Model 3s on the streets as driverless robotaxis. His timeframe: 2020. Instead, Tesla customers currently have the privilege of paying $10,000 for special software that will, one day, deliver fully autonomous driving capabilities. Until then, the cars can park, change lanes and drive onto the highway by themselves, with the occasional serious mistake.
—Bloomberg