AI Applications of Inference Cameras
"Insights from Industry" about inference cameras and their role in AI applications
The application possibilities based on intelligent cameras are almost limitless. The technology is transforming virtually every industry - and is no longer just a topic for image processing experts. Product Sales Specialist Rob Webb talked to AZoOptics about the latest developments and their impact on the food industry. You can read an excerpt from the "Insights from Industry" interview here.
What would you say the IDS NXT Experience Kit offers the food processing industry that other companies are currently not offering?
Food, being an organic product by nature, can be difficult to characterize with a fixed set of rules. On the one hand, AI-based image processing can handle that product variation, and it can be used to identify products, or defects in products, for sorting applications and quality control. On the other hand, where the IDS NXT Experience Kit really benefits the user is ease of use. Most food processing facilities will likely have automation experts who are great at machine control and programming PLCs, but image processing is a specialist field. The IDS NXT Experience Kit allows automation engineers to quickly and easily set up a solution without having to learn about image processing or bring in a third party to handle this element. In essence, they become self-sufficient across the complete processing chain.
Costs are, of course, an important aspect of any process, and because the cost of implementing and using an IDS NXT solution is relatively low, the return on investment can be very fast. If a human operative working on a quality assurance task can be re-assigned to another area where automation is not so applicable, the ROI can be measured in months.
How important is it that industries such as agritech and food processing incorporate technology such as AI and edge processing?
We hear a lot of reports about access to labor becoming extremely challenging following the UK’s withdrawal from the EU and during the current Covid pandemic. Even beyond these significant challenges, the cost of labor is increasing, so in low-margin industries like agriculture and food processing, automation can address these pressures. And whilst artificial intelligence is not applicable to every application, the variable nature of products in these industries and the challenging working environments clearly benefit from AI-based image processing and edge devices.
During the development of your IDS NXT cameras, what were some of the biggest challenges you came up against, and how were these overcome?
The main challenge is being able to handle AI-based image processing in an edge device. AI processing can be pretty intensive, since neural networks can have millions of parameters, which means processing and memory resources must be carefully managed. Also, a typical embedded CPU doesn’t really have the necessary oomph to handle AI processing at a usable rate. To combat this, IDS has developed an FPGA AI accelerator core and worked hard to optimize the neural networks created with IDS lighthouse to ensure that AI processing runs efficiently on the IDS NXT camera platform.
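As a rough, back-of-the-envelope illustration of why a plain embedded CPU struggles with this workload (the figures below are generic assumptions, not measured IDS NXT numbers), consider the multiply-accumulate budget of a mid-sized classification network:

```python
# Back-of-the-envelope estimate of CNN inference load, CPU vs. accelerator.
# All figures below are illustrative assumptions, not IDS NXT specifications.

macs_per_inference = 600e6          # ~0.6 billion multiply-accumulates, a mobile-class CNN
cpu_macs_per_second = 2e9           # optimistic sustained rate for a small embedded CPU
accelerator_macs_per_second = 50e9  # assumed throughput of a dedicated FPGA core

cpu_fps = cpu_macs_per_second / macs_per_inference
accelerator_fps = accelerator_macs_per_second / macs_per_inference

print(f"CPU only:         ~{cpu_fps:.1f} inferences per second")
print(f"FPGA accelerator: ~{accelerator_fps:.1f} inferences per second")
```

Even under these optimistic assumptions, the CPU-only figure falls well short of typical production line rates, which is the gap a dedicated accelerator core is meant to close.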
What are some of the advantages of IDS NXT taking an app-based approach?
The original concept of these intelligent cameras was based on how we use smartphones. When we buy a new smartphone, it has a number of default apps provided by the manufacturer, and then we customize it to our needs by downloading other apps. With the IDS NXT cameras, we provide a number of default apps, or Vision Apps, which cover AI-based image processing and interfacing to wider systems via OPC UA, a REST API, a serial interface, and general-purpose digital outputs. But with the Vision App Creator, the scope for what is possible is wide open, so if a customer develops their own Vision App, it can turn the IDS NXT camera into a very specific solution for a vertical market. We have a customer in Germany, for example, that has written their own pose estimation algorithm that runs on the cameras to provide co-ordinates for picking applications. On top of this, they have also written an interface that lets the camera communicate directly with a robot from Universal Robots.
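To give a sense of how a wider system might consume results from such a Vision App, here is a minimal sketch that polls a camera's result endpoint over HTTP. The camera address, endpoint path, and JSON field names are assumptions made for illustration only; they are not the documented IDS NXT REST API.

```python
import requests  # pip install requests

# Hypothetical camera address and endpoint; the actual IDS NXT REST interface
# may use different paths and payload fields - check the camera documentation.
CAMERA_URL = "http://192.168.1.50"
RESULT_ENDPOINT = "/vapps/classifier/result"  # assumed Vision App result path

def fetch_latest_classification(timeout_s: float = 2.0) -> dict:
    """Poll the camera once and return the most recent inference result."""
    response = requests.get(CAMERA_URL + RESULT_ENDPOINT, timeout=timeout_s)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = fetch_latest_classification()
    # Assumed payload fields: a class label and a confidence score.
    label = result.get("label", "unknown")
    confidence = float(result.get("probability", 0.0))
    print(f"Detected class: {label} (confidence {confidence:.2f})")
    if label == "defect" and confidence > 0.8:
        # In a real line, this is where a PLC signal or digital output would fire.
        print("Flag product for rejection")
```

In practice, an OPC UA client or a direct digital output would more commonly drive the reject mechanism; the HTTP sketch simply shows how little integration code an automation engineer would need to write on the host side.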
For more insights into this topic, read the complete interview on the AZo website (English).