High-volume manufacturing and robotic automation go hand in hand. Machine-tending robots used in these production schemes need not be very flexible, as they may just load and unload one type of part hundreds of thousands of times.
However, that inflexibility in the face of frequent job changeovers and small batch sizes is what has hindered robotic automation's adoption in contract shops. The V-500iA PC-based vision system, developed by FANUC Robotics America, Inc. (Rochester Hills, Michigan), was designed to make robotic automation of low-volume manufacturing practical so shops of all sizes can enjoy the benefits that automated machine tending offers—higher spindle utilization, the possibility for automatic deburring, gaging or assembly, and smarter use of labor.
According to Richard Johnson, FANUC's general manager of material handling, this latest PC-based vision technology enables robots to pick parts without the use of dedicated part-positioning fixtures. Parts now can be presented to a robot in a less structured manner (on a belt conveyor, for example), and robot setup may be just a matter of changing the robot's gripper to pick a new type of part. In addition to locating parts, the vision technology can also discern between different parts within a part family.
Here Mr. Johnson addresses some questions that contract shops may have about current vision systems for machine-tending robots.
- Where does the camera mount? A vision system starts with one or more cameras, which essentially snap photos of a part so the system's software can determine part location and orientation. A camera can be installed onto a robot's articulating arm to allow image capture from a variety of angles. Static cameras may be mounted remotely above or adjacent to the location where a robot would pick a part. Remote mounting allows cameras to be positioned higher than a robot's arm can reach to maximize the camera's depth of field. For parts stored on a multi-level tray magazine, for example, this high position allows the camera to focus on every tray from top to bottom.
- Can current vision systems "see" better? Earlier vision systems were extremely sensitive to lighting and contrast. Those systems were binary-based—that is, each pixel in the image was either on or off. Many current vision systems are grayscale-based, and their software has post-processing capabilities that can further improve image quality. These grayscale systems are more forgiving than binary systems in terms of lighting and contrast, but effective lighting is always a key part of a vision system's reliability.
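The binary-versus-grayscale distinction can be sketched in a few lines of NumPy. This is an illustration of the general idea, not FANUC's processing pipeline; the pixel values are made up to represent a part edge under uneven lighting.

```python
import numpy as np

# Hypothetical 8-bit grayscale image of a part edge under uneven
# lighting (the values are assumptions for illustration only).
image = np.array([
    [ 30,  40, 120, 200],
    [ 35, 110, 180, 210],
    [ 90, 150, 190, 220],
], dtype=np.uint8)

# A binary system reduces every pixel to on/off at a fixed threshold,
# so a shadowed pixel near the threshold (e.g. 110) can flip state
# with a small change in lighting.
binary = image > 128

# A grayscale system keeps all 256 levels and can, for example,
# stretch the contrast before locating features, which makes it
# more tolerant of lighting variation.
lo, hi = image.min(), image.max()
normalized = ((image - lo) / (hi - lo) * 255).astype(np.uint8)

print(binary.astype(int))
print(normalized)
```

The contrast stretch stands in for the "post-processing capabilities" mentioned above; real systems apply more sophisticated filtering, but the advantage comes from the same place: grayscale images preserve information that a one-bit threshold throws away.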
- How can parts be presented to a robot? Vision systems can locate parts that may be presented in numerous orientations on trays, conveyors or structured bins. FANUC offers fixtureless parts storage magazines, which use multiple trays that the robot can flip up and out of the way once emptied, allowing access to lower trays stocked with parts. The software used by the V-500iA system has a geometric pattern-matching feature that examines the geometry of an object, rather than looking for a trained pixel pattern. This is helpful in locating parts that may not be presented to the robot in the same orientation, or for parts that are partially covered.
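The idea behind matching on geometry rather than on a trained pixel pattern can be sketched with plain feature points. This is a toy illustration, not the V-500iA's algorithm: it compares the pairwise distances between feature points (such as hole centers), which stay the same no matter where the part sits on the tray or how it is rotated.

```python
import itertools
import math

def pairwise_distances(points):
    """Sorted distances between every pair of feature points."""
    return sorted(
        math.dist(a, b) for a, b in itertools.combinations(points, 2)
    )

def geometry_matches(model, scene, tol=1e-6):
    """True if the scene points share the model's pairwise geometry."""
    d_model = pairwise_distances(model)
    d_scene = pairwise_distances(scene)
    return len(d_model) == len(d_scene) and all(
        abs(m - s) <= tol for m, s in zip(d_model, d_scene)
    )

# Model part: three holes forming a 3-4-5 right triangle
# (coordinates are illustrative assumptions).
model = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]

# The same part rotated 90 degrees and shifted across the tray:
# its pairwise distances are unchanged, so it still matches.
rotated = [(10.0, 10.0), (10.0, 14.0), (7.0, 10.0)]

print(geometry_matches(model, rotated))  # → True
```

A pixel-pattern matcher trained on the upright part would score poorly against the rotated one; a geometry-based comparison is indifferent to position and rotation, which is why it also degrades gracefully when some features are hidden.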
- Does the application require a 2D or 3D system? For small parts located somewhere on an X-Y plane (such as on a conveyor or tray), a 2D system may suffice. For larger parts, or for parts presented to the robot offset along the X, Y and Z axes and rotated in yaw, pitch and roll, a 3D system is often required.
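The practical difference is in how much pose data the vision system must hand the robot. The sketch below shows one way to represent each case; the field names are illustrative assumptions, not any particular controller's API.

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    # Enough for flat parts on a conveyor or tray: a position on the
    # X-Y plane plus a single rotation about the vertical axis.
    x_mm: float
    y_mm: float
    rotation_deg: float

@dataclass
class Pose3D:
    # Needed when parts are offset in X, Y and Z and tilted:
    # three translations plus yaw, pitch and roll.
    x_mm: float
    y_mm: float
    z_mm: float
    yaw_deg: float
    pitch_deg: float
    roll_deg: float

def to_pose_3d(p: Pose2D, pick_height_mm: float) -> Pose3D:
    # When parts lie flat at a known height, a 2D result can be
    # promoted to a full pick pose by fixing Z and zeroing the tilts.
    return Pose3D(p.x_mm, p.y_mm, pick_height_mm, p.rotation_deg, 0.0, 0.0)

flat_part = Pose2D(x_mm=120.5, y_mm=87.0, rotation_deg=34.0)
print(to_pose_3d(flat_part, pick_height_mm=25.0))
```

The promotion trick in `to_pose_3d` is why 2D often suffices for tray and conveyor work: the missing degrees of freedom are constrained by the fixture, not measured by the camera.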
- What are the trends in robot design? Machine-tending robots are becoming more compact, faster and stronger. Compact design allows easier relocation within a shop; faster robot speed can improve throughput; and increased payload capacity permits manipulation of larger parts.