Machine vision + co-ordinated motion = high speed human

May 12th, 2016, Published in Articles: EE Publishers, Articles: EngineerIT

 

The adoption of general-purpose robotic arms that are flexible and programmable was a significant step in reducing complexity and cost compared to custom, fixture-based manufacturing lines.

Traditionally, however, these industrial robots have relied largely on rudimentary sensors (proximity sensors, encoders, etc.) to sense their environment. Such sensors offer only a limited view of the external environment, which limits the flexibility and accuracy with which these robots can perform tasks.

Fig. 1: Synergetic integration (web inspection).

We humans, on the other hand, are an exceptional example of a system that can perform complex tasks, not only because of intricate actuation mechanisms, but also because of the way we constantly incorporate visual feedback into the tasks we perform. The next generation of robots needs to integrate machine vision and coordinated motion just as closely in order to evolve into what can be called a “high-speed human”.

Integrating vision technology with robotic actuators makes it possible for machines in many applications to become increasingly intelligent and therefore more flexible. The same machine can perform a variety of tasks because it can recognise which part it is working on and adapt appropriately to different situations.

An additional benefit of using vision for machine guidance is that the same images can be used for in-line inspection of the parts being handled, so robots are not only made more flexible but can also produce results of higher quality. As the robot holds the component and receives visual feedback, it can align the component so that it is most conducive to visual inspection, which greatly reduces the need for specialised lighting and custom fixtures. For example, if a robot is inspecting a highly reflective metal component, it can detect this and rotate the component so that it is best suited to machine vision, just as a human would.

Mobile robots need machine vision even more. As the number of mobile robots on the factory floor increases (mainly to carry tools and materials), they need to be cognisant of the environment around them and must be able to interact with signage intended for humans.

Design of an integrated machine vision system

In an integrated machine vision system, the motion and vision systems can have varying levels of interaction, all the way from basic information exchange to advanced vision-based feedback. The level of interaction depends on the requirements of the machine: the sequence, the accuracy and precision, and the nature of the tasks the machine must perform. Depending on the level of interaction between the motion and vision systems, a design can be based on one of four types of integration: synergetic integration, synchronised integration, vision guided motion, and visual servo control. For a high ROI, the machine must meet the specified requirements at deployment and must scale well with next-generation process and product improvements. Integrators must therefore first identify the current and future requirements and use them to determine the type of integration that best suits the application.

Synergetic integration

Synergetic integration is the most basic type of integration. In this type of integration, the motion and the vision systems exchange basic information such as velocity or a time base. The time to communicate between the motion and vision systems is typically on the order of tens of seconds. A good example of synergetic integration is a web inspection system (Fig. 1). In a web inspection system, the motion system moves the web, usually at a constant velocity. The vision system generates a pulse train to trigger cameras, and it uses the captured images to inspect the web. The vision system needs to know the velocity of the web in order to determine the rate for triggering the cameras.
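Because the coupling at this level is loose, the shared information can be as simple as a velocity value used to compute a camera trigger rate. The sketch below is a minimal illustration of that calculation; the variable names, the constant-velocity assumption and the fixed field of view are illustrative assumptions rather than details of a specific system.

```python
# Minimal sketch: deriving a camera trigger rate from the web velocity
# reported by the motion system (synergetic integration).
# Assumptions (illustrative): constant web velocity, a camera field of
# view of fixed length along the web, and a desired overlap between
# consecutive images so no region of the web is missed.

def camera_trigger_rate(web_velocity_mm_s: float,
                        fov_length_mm: float,
                        overlap_fraction: float = 0.1) -> float:
    """Return the trigger frequency (Hz) needed to cover the whole web."""
    effective_step_mm = fov_length_mm * (1.0 - overlap_fraction)
    return web_velocity_mm_s / effective_step_mm

if __name__ == "__main__":
    # Example: a web moving at 500 mm/s, imaged 100 mm at a time with
    # 10% overlap, needs roughly 5.6 camera triggers per second.
    rate_hz = camera_trigger_rate(500.0, 100.0, 0.1)
    print(f"Trigger cameras at about {rate_hz:.1f} Hz")
```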

Synchronised integration

In synchronised integration, the motion and vision systems are synchronised through high-speed I/O triggering. High-speed signals wired between the motion and vision systems are used to trigger events and communicate commands between the two. This I/O synchronisation effectively synchronises the software routines running on the individual systems. A good example of synchronised integration is high-speed sorting, in which objects are sorted based on differences in specific image features, such as colour, shape or size.

Fig. 2: Synchronised integration (high-speed sorting).

In a high-speed sorting application, the vision system triggers a camera to capture an image of each part as it moves past the camera (Fig. 2). The motion system uses the same trigger to capture the position of the part. Next, the vision system analyses the image to determine whether the part of interest is at that position. If it is, the position is buffered. Because the conveyor moves at a constant velocity, the motion system can use the buffered position to trigger an air nozzle further down the conveyor. When the part reaches the air nozzle, the nozzle is fired to divert the part onto a different conveyor, sorting the differently coloured parts. High-speed sorting is widely used in the food industry to sort different types of products or discard defective products. It achieves high throughput, lowers labour costs, and significantly reduces defective shipments resulting from human error.
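The trigger-and-buffer sequence can be summarised in a short sketch. The helper functions and the camera-to-nozzle distance below are hypothetical stand-ins for the real camera, vision and I/O calls; the point is the buffering of latched positions until each flagged part reaches the nozzle.

```python
# Minimal sketch of the high-speed sorting sequence (synchronised
# integration). All hardware calls are hypothetical stubs; in a real
# system they would be replaced by the camera, vision and I/O APIs.
from collections import deque

CAMERA_TO_NOZZLE_MM = 300.0        # assumed camera-to-nozzle distance

def capture_image():
    """Stub: grab a frame from the triggered camera."""
    return object()

def part_should_be_ejected(image) -> bool:
    """Stub: decide from image features (colour, shape, size)."""
    return False

def fire_nozzle() -> None:
    """Stub: pulse the air-nozzle digital output."""
    pass

pending_ejects = deque()           # buffered conveyor positions

def on_camera_trigger(conveyor_position_mm: float) -> None:
    """Shared trigger: vision grabs an image, motion latches the position."""
    if part_should_be_ejected(capture_image()):
        pending_ejects.append(conveyor_position_mm)

def on_position_update(conveyor_position_mm: float) -> None:
    """Periodic motion callback while the conveyor runs at constant speed."""
    if pending_ejects and \
            conveyor_position_mm - pending_ejects[0] >= CAMERA_TO_NOZZLE_MM:
        pending_ejects.popleft()
        fire_nozzle()
```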

Vision guided motion

In vision guided motion, the vision system provides some guidance to the motion system, such as the position of a part or the error in the orientation of the part. As we move from a basic to a more advanced integration type, there is an additional layer of interaction between the motion and the vision systems. For example, you can have high-speed I/O triggering in addition to vision guidance.

Fig. 3: Vision guided motion (flexible feeding).

A good example of vision guided motion is flexible feeding. In flexible feeding, parts exist in random positions and orientations. The vision system takes an image of the part, determines the coordinates of the part, and then provides the coordinates to the motion system (Fig. 3). The motion system uses these coordinates to move an actuator to the part to pick it up. It can also correct the orientation of the part before placing it. With this implementation, you do not need any fixtures to orient and position the parts before the pick-and-place process. You can also overlap inspection steps with the placement tasks. For example, the vision system can inspect the part for defects and provide pass/fail information to the motion system, and the actuator can then discard the defective part instead of placing it.
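A central piece of such a system is converting the part coordinates from image pixels into the actuator's coordinate frame, followed by the pick, orientation correction and place (or discard) steps. The sketch below assumes a simple linear calibration and prints the commanded actions in place of real motion calls; the calibration constants and helpers are illustrative, not a specific vendor API.

```python
# Minimal sketch of vision guided motion for flexible feeding.
# Calibration values and motion commands are illustrative assumptions;
# the stubs only print the commanded action.

MM_PER_PIXEL = 0.25                  # assumed pixel-to-distance calibration
CAMERA_ORIGIN_MM = (120.0, 80.0)     # camera origin in machine coordinates

def pixel_to_machine(px: float, py: float) -> tuple[float, float]:
    """Convert pixel coordinates from the image to machine coordinates."""
    return (CAMERA_ORIGIN_MM[0] + px * MM_PER_PIXEL,
            CAMERA_ORIGIN_MM[1] + py * MM_PER_PIXEL)

def move_to(x_mm: float, y_mm: float) -> None:
    print(f"move actuator to ({x_mm:.1f}, {y_mm:.1f}) mm")

def rotate_gripper(angle_deg: float) -> None:
    print(f"rotate gripper by {angle_deg:.1f} deg")

def pick_and_place(part_px, part_angle_deg, passed_inspection) -> None:
    """One cycle: go to the part, correct its orientation, place or discard."""
    x_mm, y_mm = pixel_to_machine(*part_px)
    move_to(x_mm, y_mm)
    rotate_gripper(-part_angle_deg)          # undo the detected misalignment
    print("place at target" if passed_inspection else "discard defective part")

if __name__ == "__main__":
    # Vision reports a part at pixel (412, 208), rotated 17 deg, that passed.
    pick_and_place((412, 208), 17.0, True)
```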

In a vision guided motion system, the vision system provides guidance to the motion system only at the beginning of a move. There is no feedback during or after the move to verify that it was executed correctly. This lack of feedback makes the move prone to errors in the pixel-to-distance conversion, and the accuracy of the move depends entirely on the motion system. These drawbacks become prominent in high-accuracy applications with moves in the millimetre and sub-millimetre range.

Visual servo control

Fig. 4: A common platform for integrating vision, motion and data acquisition.

The drawbacks of vision guided motion can be eliminated if the vision system provides continual feedback to the motion system during the move. In visual servo control, the vision system provides initial guidance to the motion system as well as continuous feedback during the move. The vision system captures, analyses and processes the images to provide feedback in the form of position set-points for the position loop (dynamic look and move) or actual position feedback (direct servo). Visual servo control reduces the impact of errors from pixel-to-distance conversions and increases the precision and accuracy of existing automation. With visual servo control, you can solve applications that were previously considered unsolvable, such as those that require micrometre or sub-micrometre alignments. Visual servo implementations, especially those based on the dynamic look and move approach, are becoming viable through FPGA technologies that provide hardware acceleration for time-critical vision processing tasks and can achieve the response rates required to close the fast control loops used in motion tasks.
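In outline, a dynamic look-and-move loop repeatedly measures the remaining error with the vision system and issues a new set-point until the error falls within tolerance. The sketch below is a minimal illustration; the measurement stub, proportional gain and tolerance values are assumptions, and a real implementation would run at a fixed loop rate with hardware-accelerated image processing.

```python
# Minimal sketch of a dynamic look-and-move visual servo loop.
# The vision measurement, gain and tolerance are illustrative assumptions.

TOLERANCE_MM = 0.002      # stop when within 2 micrometres of the target
GAIN = 0.5                # proportional gain on the measured error
MAX_ITERATIONS = 200

def measure_error_mm() -> float:
    """Stub: vision measurement of the remaining alignment error."""
    return 0.0

def command_relative_move(step_mm: float) -> None:
    """Stub: send a new position set-point to the motion position loop."""
    pass

def align() -> bool:
    """Iteratively refine the position using vision feedback."""
    for _ in range(MAX_ITERATIONS):
        error_mm = measure_error_mm()
        if abs(error_mm) <= TOLERANCE_MM:
            return True                      # aligned within tolerance
        command_relative_move(GAIN * error_mm)
    return False                             # failed to converge in time
```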

Challenges and recommendations

There are several recommendations and considerations to bear in mind when tightly integrating machine vision and motion control:

  • A single embedded controller for vision and motion: To facilitate the kinds of systems discussed above, the machine vision sub-system and the motion sub-system should be able to share complex data with each other with as little latency as possible. They should be able to share point-by-point data as well as large arrays of data, both synchronously and asynchronously. Implementing both sub-systems on a single, powerful processor is one way to ensure a close handshake between the vision and motion sub-systems.
  • Fast processing and low latency: Latency in the feedback received is detrimental to the performance of any control system, and vision guided motion is no different. In this case, however, there is the added complexity of the computationally intensive nature of the sensor feedback: the time required to process an image to extract the process variable is relatively high, and it grows as the complexity of the machine vision task increases. There is also a trade-off between image resolution and latency. One solution gaining importance in this context is the use of FPGAs for image processing. Owing to their gate-level implementation of logic and their inherent parallelism, FPGAs can execute basic image processing functions at a much faster rate than a conventional sequential microprocessor. Once the basic image processing functions are implemented on the FPGA, the image can be transferred to the processor for the “image analysis” functions.
  • Integration with other I/O for enhancing world view: There are scenarios where relying on machine vision alone would greatly complicate the vision problem. Using other sensors alongside the machine vision sub-system can offer an elegant way to reduce the overall complexity of the machine vision algorithm. For example, instead of continuously processing images of the product or device under test (DUT) moving along the manufacturing line, a proximity sensor can detect when the DUT is exactly under the camera and generate a trigger for capturing the image; a minimal sketch of this trigger-driven capture follows this list. This simple integration of other sensors with the machine vision sub-system is very useful in reducing complexity.
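The sketch below illustrates the trigger-driven capture described in the last point. The proximity-sensor and camera helpers are hypothetical stubs; the essential idea is that an image is captured once per DUT, on the sensor's rising edge, rather than processing frames continuously.

```python
# Minimal sketch of trigger-driven image capture: a proximity sensor,
# rather than continuous image processing, decides when the DUT is under
# the camera. The sensor and camera helpers are hypothetical stubs.
import time

def proximity_sensor_active() -> bool:
    """Stub: read the digital input wired to the proximity sensor."""
    return False

def capture_and_inspect() -> None:
    """Stub: grab one frame and run the inspection on it."""
    print("image captured and inspected")

def run(poll_period_s: float = 0.001) -> None:
    """Poll the sensor and capture exactly one image per DUT."""
    dut_present = False
    while True:
        if proximity_sensor_active():
            if not dut_present:              # rising edge: DUT just arrived
                dut_present = True
                capture_and_inspect()
        else:
            dut_present = False              # DUT has moved on
        time.sleep(poll_period_s)
```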

Another scenario where integrating machine vision with other I/O is very useful is sensor fusion. With the emphasis on autonomous guided vehicles (AGVs) that are capable of navigation, obstacle avoidance, comprehending signboards and so on, it is sometimes essential to process the camera inputs together with sensors such as LIDAR in order to enhance the robot's perception of the environment.

Contact National Instruments, Tel 011 805-8197, ni.southafrica@ni.com
