Here is a robot's view of the world. In this demonstration, vision technology lets the robot retrieve workpieces out of a random mix and place them in parallel on a conveyor.
About 500 million years ago, there was an event that scientists call the Cambrian explosion. The number of animals visible in the fossil record suddenly increased, as if the variety of species had exploded overnight.
(If you wonder what this has to do with CNC machining, stay with me.)
A zoologist recently offered an explanation for the Cambrian explosion. This was when animals first developed eyes, he says. The presence of sighted creatures in the ecosystem caused all manner of animals to adapt in different ways. Animals' coloration, camouflage and visible shapes became important. Various methods of rapid movement also became important, because predators could now locate prey from a distance. One change in animals' senses set off an array of other developments. Add vision, says this explanation, and the entire system changes.
Fast-forward to manufacturing. The CNC machining process as we know it today is blind. We construct machining processes the way we do in order to compensate for the blindness. On a machine tool, either we fixture a part precisely where the program expects to find it, or else we allow the machine to "feel" for the part by using a probe. Adding a robot to the machine might make the process more efficient in some cases, but the robot is blind as well. Instead of setting up the machine, we place or fixture parts where the robot expects to find them.
But what if this blindness were cured?
Vision systems on robots will change automated production processes in fundamental ways.
I got a glimpse of the answer to this question earlier this year, when I visited Fanuc's manufacturing facility in Oshino, in the Yamanashi prefecture of Japan. Not surprisingly, Fanuc uses robots to make robots. (It uses them to make its other products, too.) In an automated machining area, robots load machine tools and perform related operations such as deburring. To that extent, the process is like that of many other automated manufacturers. The difference here is that these robots are equipped with vision systems. As a result, the work moves faster and the process saves expense in a variety of ways. That operators' roles change in a highly automated process may be apparent. Less apparent is the way vision reduces the significance of fixturing, and of overhead cranes as well. Thanks to vision, fundamental aspects of the process have changed.
Vision At A Glance
The robot equipped with vision technology is able to grab images in addition to grabbing physical objects. A robot attempting to visually recognize an object with this technology may seem to stare at it for a moment, almost as if in a daze. Fanuc president and CEO Dr. Yoshiharu Inaba is frank in describing this as one of the limitations of vision systems as they exist right now. A robot looking at a part might take 12 seconds to recognize it. This delay has the potential to be a bottleneck in certain processes.
Once the robot sees and recognizes the part, however, the recognition delivers a lot of information to the process. For example, the part number may be known. A networked machine tool receiving this information can then call up the corresponding program. The orientation of the part is also known, so the robot can calculate the moves necessary to grab the object and place it where it needs to go. Where today's robots tend to repeat the same precisely programmed sequences of motion over and over, a robot with a vision system might move differently every time it picks up a part (just as a human would), because the part is placed in front of it in a different way every time. A robot grabbing connecting rods out of a jumble in a bin might even grab the part from either the large hole or the small hole, whichever is easier, seeming to make that decision casually from one workpiece to the next.
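The flow described above can be sketched in a few lines of code. This is purely illustrative, under assumed names: `Recognition`, `PROGRAM_TABLE` and `plan_pick` are hypothetical, not any actual Fanuc or machine-tool API. The idea is only that one recognition result yields both a program to call up and a grasp pose to move to.

```python
# Hypothetical sketch of a vision-driven pick. All names here are
# illustrative assumptions, not a real robot controller interface.
from dataclasses import dataclass

@dataclass
class Recognition:
    part_number: str     # identity recovered from the image
    x: float             # position of the part in the robot frame (mm)
    y: float
    rotation_deg: float  # orientation of the part as seen by the camera

# Part number -> NC program, as a networked machine might resolve it
PROGRAM_TABLE = {"CR-1017": "O1017.nc", "CR-2044": "O2044.nc"}

def plan_pick(rec: Recognition):
    """Turn one recognition result into a program call and a grasp pose."""
    program = PROGRAM_TABLE[rec.part_number]
    # The grasp pose here is just the recognized pose, normalized;
    # a real system would add gripper offsets and approach vectors.
    grasp = (rec.x, rec.y, rec.rotation_deg % 360.0)
    return program, grasp

rec = Recognition("CR-1017", x=412.5, y=88.0, rotation_deg=-30.0)
program, grasp = plan_pick(rec)
print(program)  # O1017.nc
print(grasp)    # (412.5, 88.0, 330.0)
```

Because the pose comes from the camera rather than from a fixture, the grasp is computed fresh for each part, which is why the robot's motion can differ from one workpiece to the next.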
However, the installation of a system such as this is not at all casual, and that is another limitation of the vision technology. A specialist in the technology needs to set up the system, preparing it for the range of parts and applications relevant to a particular plant. Vision technology is already practical, but its implementation is not yet as easy as it should be, Dr. Inaba says. One of Fanuc's priorities is to develop software that will make the implementation easy enough that vision-equipped robots can become more accessible.
Once the implementation is achieved, the result may be a dramatically different process. In the vision-equipped plant I visited, the difference was particularly clear during a walk down a corridor between two multi-level systems for storing parts.
On my right was the multi-level pallet storage area for a pallet cell using multiple machining centers. Every pallet was ready to be loaded into a machining center, and waiting to be retrieved by the system's pallet shuttle. Each of the pallets had a fixture dedicated to it. Some fixtures had workpieces clamped in place. According to Fanuc's view, pallet systems such as this one represent a previous generation of automated machining.
The newer generation is more streamlined. This was represented by the multi-level part storage area to my left. It was just a rack of shelves. Parts were placed there without fixtures and without precise location on a pallet. In an automated system with vision technology, labor may be one source of significant savings, but fixturing is another. No longer does every workpiece have to be fixtured, or even placed, before the automated sequence of the process can begin. The only fixturing needed for a given part number can simply remain inside the machine. The robot finds the part on its own and delivers it to that fixture.
One machining operation at this plant uses pull bolts within a hydraulic clamping system in order to hold a cast workpiece from the bottom, so that every other face of the part is available to be cut. The process has four of these pull bolts—and that's all. The bolts get passed from workpiece to workpiece. While one operation drills and taps the holes for these bolts, the robot uses a wrench to remove the bolts from the workpiece that has just been cut, so the robot can then install those bolts into the next workpiece in sequence.
Even the automated operations that are independent of a machine tool don't need the fixturing. For example, deburring takes place within a marked-off rectangle on the floor. Work goes to this rectangle on a simple pushcart. As long as the cart is pushed entirely into the rectangle, the two robots on either side can identify, locate and accurately deburr the part.
This deburring station works because of the weight of the large parts processed here. No fixturing is needed to hold the part because the force of deburring isn't enough to budge it. For parts that are lighter, Fanuc engineers suggest a different solution. One robot can pick up the part and hold it steady for the other robot, which does the deburring.
The use of vision systems may also result in less use of cranes. For heavy parts, the overhead crane is such a staple of production that we often don't think of how time-consuming it is to use the crane to finesse a part into place. But when the process is changed so that operators don't have to place parts in a predictable way before machining can begin, this use of the crane becomes unnecessary. The part can simply be wheeled over to within the robot's reach.
Add vision and the system changes. However, perhaps not everything changes. The work still has to be fixtured at some point—now it's just the robot placing the part where it goes. The way the robot does this placement might look familiar. The robot may set a large workpiece out of position to begin, look at the part to size up how much it needs to move, then nudge the part as needed to get it just into place. This nudging—that is, the robot examining and finessing the part—is very suggestive of the way an operator might perform the same setup.
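The look-and-nudge placement just described is, at bottom, a simple feedback loop: measure the error, correct part of it, and repeat until the part is within tolerance. The sketch below is an assumption about how such a loop might look in one dimension—the function name, gain and tolerance values are all illustrative, not Fanuc's implementation.

```python
# Illustrative one-dimensional look-and-nudge loop. The gain, tolerance
# and function name are assumptions for the sake of the sketch.

def settle(position, target, tolerance=0.05, gain=0.8, max_nudges=20):
    """Nudge `position` toward `target` until within `tolerance` (mm).

    Returns the final position and the number of nudges used.
    """
    nudges = 0
    while abs(target - position) > tolerance and nudges < max_nudges:
        error = target - position   # what the camera "sees" on each look
        position += gain * error    # a conservative partial correction
        nudges += 1
    return position, nudges

# A part set down 2 mm out of position converges in a few nudges.
final, n = settle(position=98.0, target=100.0)
```

Each pass corrects only part of the remaining error, which is why the motion reads as careful nudging rather than one decisive move—much as an operator eases a heavy part into place.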