Bots with Brains: Future Robotic Overlords?
May 22, 2007
Science fiction has portrayed machines capable of thinking and acting for themselves with a mixture of both anticipation and dread, but what was once the realm of fiction has yet again become the subject of serious debate as robots become more intelligent.
Kenji Urada's death is often cited as the first recorded fatality at the hands of a robot, although Robert Williams was killed by an industrial robot two years earlier. Since those deaths, and despite the introduction of improved safety mechanisms, there have been many more gruesome industrial fatalities in which robots have crushed workers, smashed their heads and even poured molten aluminum over them.
As even nonthinking robotic machines clearly can be fatal, and as robots move from the factory floor into homes and workplaces and develop to the point where they can make their own decisions, there are growing demands that they be bound by ethical laws.
South Korea, which spends about $80 million a year to develop robots, predicts there will be a robot in every household in little more than a decade. This alone is not necessarily worth writing home about (robot vacuum cleaners, which can "decide" for themselves when to move from room to room, as well as robotic toys and lawnmowers, are already in many households), except that the country's Ministry of Commerce, Industry and Energy's robot team also predicts these robots will develop "strong intelligence."
Indeed, the creation of a superhumanly intelligent artificial intelligence (AI) system could be possible within 10 years, given an "AI Manhattan Project," Dr. Ben Goertzel, CEO and Chief Scientist of AI firm Novamente LLC and bioinformatics firm Biomind LLC, recently wrote.
In the late 1990s, there was Deep Blue (pictured to the right, Credit: Avaye), the IBM computer programmed to process millions of alternative chess positions per second. With its massive computational power applied in an AI technique known as "brute force," Deep Blue could instantly analyze every move chess prodigy and world champion Garry Kasparov made and (theoretically) compute the best countermeasure. Sure enough, Deep Blue won the first game of their match, becoming the first computer ever to defeat a reigning world champion under regulation time controls.
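The "brute force" idea can be illustrated with a minimal sketch: enumerate every line of play to a fixed depth and pick the move whose worst-case outcome is best, the classic minimax rule. The toy game tree and leaf scores below are invented for illustration; real chess engines like Deep Blue search billions of actual positions with far more sophisticated pruning and evaluation.

```python
def minimax(node, maximizing):
    """Return the best achievable score from `node`.

    `node` is either a numeric leaf score (a static evaluation of a
    position) or a list of child nodes (the legal moves from here).
    """
    if isinstance(node, (int, float)):  # leaf: no moves left to search
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

def best_move(children):
    """Pick the move whose minimax value is best for the side to move."""
    # After our move the opponent moves, so each child is minimized.
    scores = [minimax(child, maximizing=False) for child in children]
    return scores.index(max(scores)), max(scores)

# A tiny two-ply tree: we pick a branch, then the opponent picks the
# reply that is worst for us.
tree = [[3, 12], [2, 4], [14, 5]]
print(best_move(tree))  # prints "(2, 5)": branch 2 guarantees a score of 5
```

Branch 0 only guarantees 3 and branch 1 only 2 against best defense, so the exhaustive search settles on branch 2. Deep Blue's edge was doing exactly this, but millions of positions per second deep.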
Software robots (basically, complicated computer programs) already make important financial decisions. Whose fault is it if they make a bad investment?
But there's a more sinister aspect that is being debated.
Autonomous robots, which are able to make decisions without human intervention, are increasingly being applied to military roles. There is also the DARPA Grand Challenge, a robotic contest for building a driverless car capable of successfully completing a 132-mile off-road course; this year, rather than navigating across the desert, the vehicles will be required to negotiate a 60-mile course through simulated traffic in less than six hours.
The development and deployment of these autonomous robots raises difficult questions, according to Professor Alan Winfield of the University of the West of England.
"If an autonomous robot kills someone, whose fault is it?" Professor Winfield asks.
Speaking ahead of a public debate at the Dana Centre, part of London's Science Museum, scientists expressed concern about the use of decision-making robots, particularly for military use, BBC News reported last month. And a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.
At the top of their list of concerns: safety.
Robots were once confined to specialist applications in industry and the military, where users received extensive training on their use, but they are increasingly being used by ordinary people. And as these robots become more intelligent, it will become harder to decide who is responsible if they injure, or kill, someone: the designer, the user, or the robot itself?
Currently, experts in South Korea are drawing up an ethical code to prevent humans from abusing robots, and vice versa. The committee's ethical code draws in part from the "Three Laws of Robotics," introduced by renowned author Isaac Asimov as early as the 1940s and since used often in works of science fiction by other authors:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
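The laws above read like a priority-ordered rule system, and one way to see both their appeal and their limits is to sketch them as one. In the toy sketch below, the `Action` flags (`harms_human`, `disobeys_order`, `endangers_self`) are hypothetical labels invented purely for illustration; the genuinely hard problem, which no real robot solves, is deciding whether those predicates are true in the first place.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action tagged with (hypothetical) moral predicates."""
    name: str
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def permitted(action):
    """Apply the Three Laws in priority order; return (allowed, reason)."""
    if action.harms_human:
        return False, "First Law: may not injure a human being"
    if action.disobeys_order:
        return False, "Second Law: must obey orders from humans"
    if action.endangers_self:
        return False, "Third Law: must protect its own existence"
    return True, "permitted"

def choose(actions):
    """Return the first candidate action no law vetoes, or None."""
    for action in actions:
        allowed, _ = permitted(action)
        if allowed:
            return action
    return None

candidates = [
    Action("shove the bystander", harms_human=True),
    Action("fetch the coffee"),
]
print(choose(candidates).name)  # prints "fetch the coffee"
```

Note how the ordering encodes the laws' built-in conflict rules: an action that obeys an order but harms a human is vetoed at the First Law before the Second Law is ever consulted. Asimov's own stories, of course, are mostly about how such tidy rules break down in practice.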
Robot laws have risen to popular debate yet again because robots are becoming more mainstream and, more important, grounded in real science rather than in science fiction.
Earlier this month, RoboBusiness 2007, the international business development event for the mobile robotics and intelligent systems industry, showcased a motley mix of consumer, commercial and military robots in Boston, Mass. At the conference, Carnegie Mellon University announced its 2007 Robot Hall of Fame inductees, comprising both real and science fiction robots, and for the first time the jury selected more robots from science fact than science fiction. Three of the four robots selected by a jury of 25 leading roboticists were built by actual scientists.
In many ways, the Deep Blue chess match was inconclusive in settling the "human" versus "artificial" intelligence debate. If Kasparov's humanity (his ability to reason) was his strength, it was also his weakness, The News-Journal recently noted. Before game six, Kasparov was mentally drained and played cautiously, and when he made a disastrous mistake early in the game, he resigned himself to losing. Unlike computers, humans feel.
The issue of robot rights was again addressed last December after a speculative paper commissioned by the British government suggested robots might one day be smart enough to demand emancipation from human owners and raised the possibility that they might have to be treated as citizens.
Yet, to paraphrase Alden March Bioethics Institute director Glenn McGee's recent column in The Scientist, we are much closer to making stronger, more intelligent robots than we are to creating a code of ethics to guide our stewardship of robo-peers.
Will thinking robots ever become Data-like androids or humanoid Cylons, capable of interacting as smarter human peers? Will the world's Roombas and RoboSapiens one day tire of their servitude and attempt to unleash Judgment Day on their foolish masters?
Are you anxious for or dreading the rise of intelligent robots? Is the debate even worth entering?
UPDATE (5/25): Whoa, more than 100 comments about this blog post on Digg; more than 700 "diggs" by users. IMT readers can do the same by clicking on the "Digg it" link between our resources and reader comments below.
How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion, by Daniel H. Wilson, Bloomsbury USA (2005)
Robot future poses hard questions, BBC News, April 24, 2007
South Korea drawing up code of ethics for robots, Reuters, May 7, 2007
Computer bested humanity, technically, by Bruce Kauffmann, The News-Journal, May 9, 2007
Ethical code for robots in works, South Korea says, CBC News, March 7, 2007
A Robot Code of Ethics, by Glenn McGee, The Scientist, May 2007
The ethical dilemmas of robotics, by Chris Barron, BBC News, March 9, 2007