Best Practices for Image Recognition Transforms
When an MxSuite project uses a significant number of Image Recognition Transforms for image processing, the timing of CAN or other signals to the DUT can be affected. This topic explains why this happens, describes ways to minimize the impact and maintain performance, and covers other considerations when developing MxSuite tests.
When a CaptureImage event is issued in an MxVDev TestCase, MxSuite executes all of the Image Recognition Transforms referenced by the Scenario (not just those in the TestCase of interest). Consequently, if a Scenario contains a large number of Signals derived from Image Recognition Transforms, processor performance is affected. Time-critical events in the neighborhood of the Capture event are delayed. The delay eventually goes away as MxVDev catches up, but CAN and other signal traffic is affected before it does.
In addition, using large areas of interest (such as the whole camera field of view) bogs down the NI Vision software and introduces longer delays.
The following two practices help maximize performance of MxSuite when using Image Processing:
Vision processing software is CPU-intensive as well as memory-intensive. The larger the captured field of view, the slower and choppier the workstation appears to run. Rather than capturing a large field of view and selecting large regions of interest, minimize the regions of interest to only what is necessary. You may want to organize your TestCases around specific features that affect certain areas of the cluster.
Minimize the number of image-recognition Signals in a Scenario. Rather than testing all features in a single Scenario, create one or more scenarios related to a single feature (such as seat belt warning or door ajar warning). The TestCases in those Scenarios should use Signals necessary to the feature, and not include extraneous Signals which could rob processor performance. Scenarios can be run separately or grouped as desired in a regression test.
The rule of thumb is to divide the test plan’s tests among several Scenarios, grouping them by feature. Treat Scenarios as if each one corresponds to a test or a closely related group of sub-tests.
Use separate Scenarios to test different functional use cases instead of using a single Scenario to test all features. For example, use a separate Scenario for each of the various warning displays, such as seat belt, door ajar, and tire pressure. Use the Regression Test feature in MxVDev to string the Scenarios together into larger groups; the Scenarios belonging to warnings can be grouped together in one or more regression tests. This minimizes the number of Image Recognition firings when a Capture event is issued. (Tags can be used to identify which Scenarios are related for Regression Tests.) Grouping similar items together also improves the readability of the project and simplifies future maintenance.
You don’t want leftover conditions from a previous Scenario to affect later ones, either during testing or when Scenarios are strung together in a Regression Run. Scenarios should begin with a TestCase that initializes the DUT and end with a TestCase that shuts down the DUT; this allows Scenarios to be run independently.
Use the following information to modify TestCases created with versions older than 126.96.36.199050 and to write new TestCases.
In MxSuite 188.8.131.52050 and later, the timing of Image Recognition Signals has changed to reflect the fact that Image Recognition doesn’t happen in zero real time. In older versions, Recognition Signals were re-timed to line up with the Capture Signal that initiated them. Now, each Recognition Signal is time-stamped with its actual completion time. This is useful in that it indicates when recognition is busy, so you can avoid issuing a new Capture before the processing of a previous Capture is complete.
It is important to note that the recognition result, even though it lags, is based on what the camera "saw" when the Capture event was sent.
To get a handle on how long recognition may take, you can use the following as a rough guide:
For a GigE camera, the duration is approximately 0.006 seconds per recognition Signal in the Scenario. Additionally, there is an extra delay in the response depending on how many recognition images (regions of interest) are saved, approximately 0.002 seconds for each saved image.
For example, if you have 3 image recognition Signals in the Scenario, two of which save an image of their region of interest, then the pass/fail for image recognition TestCases can be decided after ~0.022 seconds ((0.006 * 3) + (0.002 * 2) = 0.022).
The actual duration is dependent on factors that affect computation time such as the PC model, number of cameras, and sizes of the areas of interest for the various Signals.
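The rough guide above reduces to simple arithmetic. The following sketch is illustrative only (it is not part of MxSuite); the constants 0.006 and 0.002 come from the GigE camera estimates above and will vary with your PC, cameras, and region sizes:

```python
def estimate_recognition_delay(num_recognition_signals, num_saved_images,
                               per_signal_s=0.006, per_saved_image_s=0.002):
    """Rough estimate of image-recognition delay, in seconds.

    Based on the GigE camera guideline: ~0.006 s per recognition Signal
    in the Scenario, plus ~0.002 s for each saved region-of-interest image.
    """
    return (num_recognition_signals * per_signal_s
            + num_saved_images * per_saved_image_s)

# Example from the text: 3 recognition Signals, 2 of which save an image
print(round(estimate_recognition_delay(3, 2), 3))  # → 0.022
```

Treat the result only as a lower bound when planning Capture timing; measure on your own workstation before relying on it.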
Instead of putting the expected transition on the Image recognition Signal at the same time as the CaptureImage event (as is currently done), delay the expected transition after the CaptureImage event.
1. First, determine the maximum image-processing delay. Use a TestCase that uses the maximum-sized region of interest. Run the test several times and observe the delay in the Image Recognition Signal after the image is captured. Take the maximum reading observed and note this delay value. For all TestCases, extend the no-check DataBlock by this delay value.
2. In your TestCase, right-click the DataBlock that is disabled for Pass/Fail checking, then select DataBlock -> Properties.
Add the delay value to the original Duration value on the General tab of the DataBlock properties to extend the DataBlock. Pass/Fail checking for Image Recognition then starts once the Image Recognition results are available.
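The adjustment in the steps above amounts to taking the worst delay observed across several runs and adding it to the DataBlock’s original Duration. A hypothetical sketch (the function name and the measurement values are illustrative, not an MxSuite API):

```python
def extended_duration(original_duration_s, observed_delays_s):
    """Extend a no-check DataBlock's Duration by the worst observed
    image-recognition delay, so Pass/Fail checking starts only after
    recognition results are available."""
    return original_duration_s + max(observed_delays_s)

# Hypothetical delay measurements from several test runs (seconds)
delays = [0.018, 0.022, 0.020]
print(round(extended_duration(0.100, delays), 3))  # → 0.122
```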