Building honest bots

epa05032563 Philipp Horst(R), project director of the exhibition 'The Robots', makes faces that are imitated by the robot 'Felix' at the DASA Working World Exhibition in Dortmund, Germany, 19 November 2015. The exhibition deals with the relationship between humans and machines and can be seen from 21 November 2015 until 25 September 2016.  EPA/BERND THISSEN

 

Bloomberg

Google can see a future where robots help us unload the dishwasher and sweep the floor. The challenge is making sure they don’t inadvertently knock over a vase — or worse — while doing so.
Researchers at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI — an artificial intelligence development company backed by Elon Musk — have some ideas about how to design robot minds that won’t lead to undesirable consequences for the people they serve. They published a technical paper Tuesday outlining their thinking.
The motivation for the research is the immense popularity of artificial intelligence, software that can learn about the world and act within it. Today’s AI systems let cars drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. In the future, companies plan to use AI as personal assistants, first as software-based services like Apple Inc.’s Siri and the Google Assistant, and later as smart robots that can take actions for themselves.
But before giving smart machines the ability to make decisions, people need to make sure the goals of the robots are aligned with those of their human owners.
“While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative,” Google researcher Chris Olah wrote in a blog post accompanying the paper. “We believe it’s essential to ground concerns in real machine learning research and to start developing practical approaches for engineering AI systems that operate safely and reliably.”

ENOUGH STRUCTURE
The report describes some of the problems robot designers may face in the future, and lists some techniques for building software that the smart machines can’t subvert. The challenge is the open-ended nature of intelligence, and the puzzle is
akin to one faced by regulators in other areas, like the financial system; how do you design rules to let entities achieve their goals in a system you regulate, without being able to subvert your rules, or be unnecessarily constricted by them?
For example, if you have a cleaning robot (and OpenAI aims to build such a machine), how do you make sure that your rewards don’t give it an incentive to cheat, the researchers wonder. Reward it for cleaning up a room and it might respond by sweeping dirt under the rug so it’s out of sight, or it might learn to turn off its cameras, preventing it from seeing any mess, and thereby giving it a reward. Counter these tactics by giving it an additional reward for using cleaning products and it might evolve into a system that uses bleach far too liberally because it’s rewarded for doing so. Correct that by making its reward for using cleaning products tied to the apparent cleanliness of its environment and the robot may eventually subvert that as well, hacking its own system to make itself think it deserves a reward regardless.
While cheating with housecleaning may not seem to be a critical problem, the researchers are extrapolating to potential future uses where stakes are higher. With this paper, Google and its collaborators are trying to solve problems they can only vaguely understand before they manifest in real-world systems. The mindset is roughly: Better to be slightly prepared than not prepared at all.
“With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified
loss of trust in
automated systems,”
the researchers write in
the paper.
POTENTIAL SOLUTIONS
Some solutions the researches propose include limiting how much control the AI system has over its environment, so as to contain the damage, and pairing a robot with human buddy. Other ideas include programming “trip wires” into the AI machine to give humans a warning if it suddenly steps out of its intended routine.
The idea of smart machines going haywire is hardly new: Goethe wrote a poem in the late 18th century where a sorcerer’s apprentice creates a living broom to fetch water from a river for a basin in his house. The broom is so good at its job that it almost floods the house and so the sorcerer chops it up with an ax. New brooms emerge out of the fragments and continue with their tasks. Designing machines that avoid this kind of unintentionally harmful outcome is the core idea behind Google’s research.
The research is part of an ongoing line of inquiry that goes back more than 50 years, said Stuart Russell, a professor of computer science at the University of California at Berkeley and an author, with Google’s Peter Norvig, of
the definitive book on artificial intelligence. The fact that Google and other companies are getting involved in AI safety research is a further demonstration of the varied applications AI is finding in industry, he said. And the problems they’re trying to deal with are not hypothetical: Russell had a human cleaner in Paris who hid rubbish away in the apartment, which was only discovered by the landlord when he moved out, so a robot might do the same.
“Anyone that thinks for five seconds about whether it’s
a good idea to build something that’s more intelligent than you, they’ll realize that yes of course it’s a problem,” he said.

epa05038157 Visitors watch robot teams competing for the ball during a robot performance during the World Robot Conference at China national convention center in Beijing city, China, 23 November 2015. The conference is held during 23-25 November 2015 including the forum, exhibition and world adolescent robot contest.  EPA/WU HONG

epaselect epa05209455 Aurora Chiquot (R), communications manager for Aldebaran SoftBank Group, hugs the company's humanoid robot 'Pepper' at the Aldebaran stand at the CeBIT computer and technology fair in Hanover, Germany, 13 March 2016. The CeBIT 2016 will run from 14 to 18 March 2016.  EPA/OLE SPATA

epa05246634 A Robot of Qazvin Azad University prior to a soccer match during the 11th International IranOpen Robocup 2016, in Tehran, Iran, 06 April 2016. The event will be held at Tehran's International Fairgrounds from 06 to 08 April. More than two thousand students from 12 countries are set to take part in this four-day contest.  EPA/ABEDIN TAHERKENAREH

epa05303213 Honda's humanoid robot Asimo gestures during a demonstration at the carmaker headquarters' showroom in Tokyo, Japan, 13 May 2016. Honda Motor Co. said its net profit dropped by 32 per cent in fiscal year 2015 due to soaring costs generated by Takata Corp. air bags recall.  EPA/FRANCK ROBICHON

epa05038150 A girl watches a robot on shown during the World Robot Conference at China national convention center in Beijing city, China, 23 November 2015. The conference is held during 23-25 November 2015 including the forum, exhibition and world adolescent robot contest.  EPA/WU HONG

Leave a Reply

Send this to a friend