Best Practices for Image Recognition Transforms

When an MxSuite project uses a significant number of Image Recognition Transforms for image processing, the timing of CAN or other signals to the DUT can be affected. This topic explains why this happens, identifies ways to minimize the impact and maintain performance, and covers other considerations when developing MxSuite tests.

The Impact of Image Recognition Software on Performance

When a CaptureImage event is issued in an MxVDev TestCase, MxSuite executes all of the Image Recognition Transforms referenced by the Scenario (not just those in the TestCase of interest). Consequently, if a Scenario contains a large number of Signals derived from Image Recognition Transforms, processor performance is affected: time-critical events in the neighborhood of the Capture event are delayed. The delay eventually goes away as MxVDev catches up, but CAN and other signal traffic is affected in the meantime.

In addition, using large areas of interest (such as the whole camera field of view) bogs down the NI Vision software and introduces longer delays.

The following two practices help maximize performance of MxSuite when using Image Processing:

1. Minimize Areas of Interest
2. Ensure Your TestCases Account for Image Processing Delays


Designing TestCases

Use the following information to help you modify TestCases created with older versions of MxSuite, and to write new TestCases.

Image Recognition Signal Pass/Fail Timing

In recent versions of MxSuite, the timing of Image Recognition Signals has changed to reflect the fact that Image Recognition does not happen in zero real time. In older versions, Recognition Signals were re-timed to line up with the Capture Signal that initiated them. Now, each Recognition Signal is time-stamped with its actual completion time. This is useful in that it gives an indication of when recognition is busy, so you can avoid issuing a new Capture before the processing of a previous Capture is complete.

It is important to note that the recognition result, even though it lags, is based on what the camera "saw" when the Capture event was sent.

To estimate how long recognition may take, use the following as a rough guide:

For a GigE camera, the duration is approximately 0.006 seconds per recognition Signal in the Scenario. Additionally, there is an extra delay in the response depending on how many recognition images (regions of interest) are saved: approximately 0.002 seconds for each saved image.

For example, if you have 3 image recognition Signals in the Scenario, two of which save an image of their region of interest, then the pass/fail for image recognition TestCases can be decided after approximately 0.022 seconds ((0.006 * 3) + (0.002 * 2) = 0.022).

The actual duration is dependent on factors that affect computation time such as the PC model, number of cameras, and sizes of the areas of interest for the various Signals.
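The rough guide above can be sketched as a small calculation. This is an illustrative estimate only, using the approximate per-Signal and per-saved-image figures quoted in this topic; the constant names and the helper function are hypothetical, not part of MxSuite.

```python
# Rough estimate of image-recognition delay for a GigE camera, based on the
# approximate figures quoted above. These constants are illustrative values
# from this topic, not exact MxSuite parameters.

PER_SIGNAL_DELAY_S = 0.006       # ~6 ms per recognition Signal in the Scenario
PER_SAVED_IMAGE_DELAY_S = 0.002  # ~2 ms per saved region-of-interest image

def estimate_recognition_delay(num_signals: int, num_saved_images: int) -> float:
    """Return the approximate time (seconds) before pass/fail can be decided."""
    return (num_signals * PER_SIGNAL_DELAY_S
            + num_saved_images * PER_SAVED_IMAGE_DELAY_S)

# The worked example above: 3 recognition Signals, 2 of which save an image.
print(round(estimate_recognition_delay(3, 2), 3))  # 0.022
```

Treat the result only as a starting point; measure the actual delay on your own system as described below.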

What needs to be done for existing TestCases:

Instead of putting the expected transition on the Image recognition Signal at the same time as the CaptureImage event (as is currently done), delay the expected transition until after the CaptureImage event.


1. First, determine the maximum image processing delay. Use a TestCase that uses the maximum-sized region of interest. Run the test several times and observe the delay in the Image recognition Signal after the image is captured. Take the maximum reading observed and note this delay value. For all TestCases, extend the no-check DataBlock using this delay value.

2. In your TestCase, right-click the DataBlock that is disabled for Pass/Fail checking, and then select DataBlock -> Properties.


Add the delay value to the original Duration value in the General tab of the DataBlock properties to extend the DataBlock. Pass/Fail checking for Image recognition starts once the Image recognition results are available.
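The duration-extension arithmetic above can be sketched as follows. This is a minimal illustration, assuming hypothetical timing values observed over several runs; the function name and the example numbers are not from MxSuite.

```python
# Sketch of extending the no-check DataBlock: take the maximum image-recognition
# delay observed across several runs and add it to the DataBlock's original
# Duration. All timing values here are illustrative assumptions.

def extended_duration(original_duration_s: float,
                      observed_delays_s: list[float]) -> float:
    """Extend the DataBlock Duration by the worst observed recognition delay."""
    max_delay = max(observed_delays_s)  # step 1: worst-case delay across runs
    return original_duration_s + max_delay

# Example: delays observed over several runs of the maximum-ROI TestCase,
# applied to a DataBlock whose original Duration is 0.100 seconds.
delays = [0.019, 0.022, 0.021]
print(round(extended_duration(0.100, delays), 3))  # 0.122
```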