Can industrial robots save your company…or the nation? Forbes suggests as much. Meanwhile, Stanford computer scientists recently unveiled a machine vision algorithm that gives robots the ability to approximate distances from single still images (i.e., depth estimation).
In North America alone, close to 15,000 industrial robots, worth more than $900 million, have been purchased. These figures are on track to double, then double again, at which point the U.S. might catch up with Japan and reduce the need to outsource many manufacturing and distribution jobs to China.
Some 45 percent of the robots purchased in North America in 2005 were materials-handling robots, according to a recent Forbes article. In logistics, industrial robots can deliver consistent quality in demanding applications such as mixed-load palletizing; in distribution or manufacturing, this may mean individual pallets that must each be stacked with a variety of products in a specific way.
Where do robots stand relative to domestic and foreign human workers’ wages? Consider these numbers:
• The average wage for a U.S. warehouse or distribution worker is around $15 per hour (plus benefits);
• The average wage for this same work in China is about $3 per hour;
• The average wage for a skilled UAW U.S. automobile worker is $25 to $30 per hour, plus the staggering costs of health care coverage and retirement; and…
• The average cost to operate an industrial robot is “30 cents per hour,” according to Ron Potter, director of robotic technologies at Factory Automation Systems.
Even if this last figure is doubled to 60 cents to include a vision system, a software package and yearly maintenance, it is still only one-fifth the hourly cost of a Chinese laborer.
“There is, of course, an initial purchase and installation cost (approximately $60,000), but this can be amortized in a few years,” the magazine said. Afterward, the cash flow is “impressive.” Further, the cost of robots’ necessary tooling, such as end effectors (grippers, for instance), must be weighed against the cost of the tools manual laborers use.
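The article’s figures can be sanity-checked with a little arithmetic. In the sketch below, the hourly rates and the $60,000 installation cost come from the article; the 2,000-hour work year (one shift, one robot replacing one worker) is my own assumption:

```python
# Back-of-the-envelope check of the article's cost figures.
# Hourly rates and install cost are from the article; the 2,000-hour
# work year and one-robot-per-worker assumption are mine.

ROBOT_COST_PER_HOUR = 0.60   # doubled figure incl. vision, software, maintenance
CHINA_WAGE_PER_HOUR = 3.00
US_WAGE_PER_HOUR = 15.00     # warehouse/distribution, excluding benefits
INSTALL_COST = 60_000        # approximate purchase + installation

# Robot vs. Chinese laborer: the "one-fifth" claim.
ratio = ROBOT_COST_PER_HOUR / CHINA_WAGE_PER_HOUR
print(f"robot/China hourly cost ratio: {ratio:.2f}")   # 0.20, i.e. one-fifth

# Payback period vs. a U.S. worker, assuming a 2,000-hour year.
hours_per_year = 2_000
savings_per_hour = US_WAGE_PER_HOUR - ROBOT_COST_PER_HOUR
payback_years = INSTALL_COST / (savings_per_hour * hours_per_year)
print(f"payback: {payback_years:.1f} years")           # about 2.1 years
```

Under these assumptions the robot pays for itself in roughly two years, which squares with the magazine’s “amortized in a few years.”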
Aside from the occasional holiday-related pseudo-robot decapitation, robots will no doubt have a fairly long life in manufacturing and distribution, particularly given their cost efficiencies and their viability as an alternative to offshore outsourcing.
Recently, Stanford computer scientists unveiled a machine vision algorithm that gives robots the ability to approximate distances from single still images. Most robots today are too clumsy to move around obstacles at high speeds.
However, “with substantial sensor arrays and considerable investment, robots are gaining the ability to navigate adequately.” Stanley, for example, the Stanford robot car that drove a desert course in the DARPA Grand Challenge this past October, used lasers, radar and a video camera to scan the road ahead. According to a Science Daily article this week, the work of Stanford computer science Assistant Professor Andrew Ng and his students could let robots that are too small to carry many sensors, or that must be built cheaply, navigate with just one video camera. In fact, “using a simplified version of the algorithm, Ng has enabled a radio-controlled car to drive autonomously for several minutes through a cluttered, wooded area before crashing.”
To give robots depth perception, Ng and two graduate students designed software capable of learning to spot certain depth cues in still images. The cues include variations in texture (surfaces that appear detailed are more likely to be close), in edges (lines that appear to converge, such as the sides of a path, indicate increasing distance) and in haze (objects that appear hazy are likely farther away).
The software breaks each image into sections and analyzes them both individually and relative to neighboring sections, so the image is examined as thoroughly as possible. The software also looks for cues at varying levels of magnification to ensure that it doesn’t miss important details.
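To make the “break into sections, analyze at several magnifications” idea concrete, here is a toy sketch. It is illustrative only: Ng’s actual system learns depth from many engineered cues with a trained model, whereas this version computes a single hand-rolled cue (local texture variance) over square patches at two scales.

```python
# Toy monocular depth-cue sketch: texture variance per patch.
# Detailed (high-variance) patches tend to be close; flat patches
# tend to be distant or featureless. Not Ng's actual algorithm.

def patch_variance(image, patch_size):
    """Split a 2-D grayscale image (list of rows of pixel values)
    into square patches and return each patch's pixel variance."""
    h, w = len(image), len(image[0])
    variances = []
    for top in range(0, h - patch_size + 1, patch_size):
        row = []
        for left in range(0, w - patch_size + 1, patch_size):
            pixels = [image[top + i][left + j]
                      for i in range(patch_size)
                      for j in range(patch_size)]
            mean = sum(pixels) / len(pixels)
            row.append(sum((p - mean) ** 2 for p in pixels) / len(pixels))
        variances.append(row)
    return variances

# A tiny 4x4 image: busy (high-variance) left half, flat right half.
img = [[10, 200, 50, 50],
       [180, 20, 50, 50],
       [30, 190, 50, 50],
       [170, 40, 50, 50]]

fine = patch_variance(img, 2)    # 2x2 patches: four texture scores
coarse = patch_variance(img, 4)  # whole image as one coarse patch
print(fine)
print(coarse)
```

The left patches score high (nearby detail) and the right patches score zero (a distant or featureless surface), and running the same cue at two patch sizes mirrors the multi-scale analysis described above.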
According to Science Daily:
Using the Stanford algorithm, robots were able to judge distances in indoor and outdoor locations with an average error of about 35 percent — in other words, a tree that is actually 30 feet away would be perceived as being between 20 and 40 feet away. A robot moving at 20 miles per hour and judging distances from video frames 10 times a second has ample time to adjust its path even with this uncertainty. Ng points out that compared to traditional stereo vision algorithms — ones that use two cameras and triangulation to infer depth — the new software was able to reliably detect obstacles five to 10 times farther away.
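The quoted numbers are easy to verify with unit conversions alone; every input below comes straight from the excerpt:

```python
# Checking the Science Daily numbers: 35% average error on a 30-foot
# tree, and distance traveled between frames at 20 mph and 10 fps.

error_rate = 0.35
tree_distance_ft = 30
error_ft = error_rate * tree_distance_ft            # +/- 10.5 ft
lo, hi = tree_distance_ft - error_ft, tree_distance_ft + error_ft
print(f"perceived range: {lo:.1f} to {hi:.1f} ft")  # close to the quoted 20-40 ft

mph = 20
frames_per_second = 10
ft_per_second = mph * 5280 / 3600                   # about 29.3 ft/s
ft_per_frame = ft_per_second / frames_per_second
print(f"travel between depth estimates: {ft_per_frame:.2f} ft")
```

At under three feet of travel per depth estimate, the robot gets roughly ten fresh readings before it covers the width of the error band, which is why the 35 percent uncertainty still leaves “ample time to adjust.”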
Although there is room for improvement, this depth estimation is not bad, especially considering that most robots are still too clumsy to move around obstacles at high speed; they’re about as clumsy as a two-year-old on skis (…or that old Irish guy swaying and singing in the corner of the pub every night at last call).
One more thing: five years ago, an individual robot cost one-fifth what it would have in 1990. Prices continue to drop, and some manufacturers and distributors have gotten the message. According to the Robotic Industries Association, North American manufacturers have “increased their purchases by 30% as of the third-quarter 2005 over 2004.”