The integration of visual and tactile sensing for the definition of regions within a robot workcell
Vision systems are widely used in robot workcells for sensory feedback. The resolution of a vision system is usually good enough to locate an object so that it can be grasped, but not good enough to locate an insertion hole accurately. Tactile probes are used to locate objects accurately; however, they require a database containing the approximate location of an object in order to be used effectively.
This thesis presents the development of a robot workcell which utilizes a vision system and tactile probe to identify, locate, and orient two types of circuit board fixtures. The vision system approximately locates the corner points of each fixture in the robot workcell. The tactile system uses the database created by the vision system to conduct a tactile search for each fixture and to accurately define the coordinates of each corner point. After a fixture is accurately located, a region (sub-coordinate system) is defined about the fixture. The location of each insertion hole within a fixture is defined relative to the region, and the robot subsequently inserts the tactile probe into each hole.
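The region concept described above can be illustrated with a short sketch. The thesis implements this in AML/E on the actual workcell; the Python below is only a hypothetical illustration of the underlying geometry: two accurately located corner points fix a region's origin and x-axis, and hole locations defined relative to the region are mapped back into workcell coordinates.

```python
import math

def define_region(origin_corner, x_axis_corner):
    """Build a 2-D sub-coordinate system (region) from two fixture
    corner points: the first becomes the region's origin and the
    second fixes the direction of the region's x-axis."""
    ox, oy = origin_corner
    dx = x_axis_corner[0] - ox
    dy = x_axis_corner[1] - oy
    theta = math.atan2(dy, dx)  # rotation of the region in the workcell frame
    return (ox, oy, theta)

def region_to_workcell(region, hole_local):
    """Map a hole location defined relative to a region into
    workcell coordinates (rotate, then translate)."""
    ox, oy, theta = region
    hx, hy = hole_local
    wx = ox + hx * math.cos(theta) - hy * math.sin(theta)
    wy = oy + hx * math.sin(theta) + hy * math.cos(theta)
    return (wx, wy)

# Example: a fixture corner at (10, 5) and an adjacent corner at (10, 9)
# put the region's x-axis along the workcell +y direction, so a hole at
# (2, 0) in region coordinates lies at roughly (10, 7) in the workcell.
region = define_region((10.0, 5.0), (10.0, 9.0))
hole = region_to_workcell(region, (2.0, 0.0))
```

Errors in the two corner points propagate through this transform, which is consistent with the observation later in the abstract that insertion error grows with a hole's distance from the region origin.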
The vision system developed can define any two-dimensional object and can locate the corner points of any straight-edged object whose adjacent sides have an included angle greater than 90 degrees. The tactile system is self-calibrating and has a repeatability of 0.009 inches.
A probe insertion error analysis was conducted on the system. The average probe insertion error for the system was determined to be 0.0337 inches. In addition, it was determined that probe insertion error increases with the distance between a hole and the origin of its defining region, and that the major source of probe insertion error is the robot language's (AML/E Version 4.0) inability to accurately define points within a region.