Technical Article

Artificial Vision Technology and Deep Learning


The production industry is constantly facing the challenge of improving the efficiency and quality of its products. In this blog article, we discuss how a leading company in connectivity solutions and sensors overcame obstacles in the production of plastic parts through a customized plastic injection molding solution, integrating advanced artificial vision technologies.

Challenge: Increasing Efficiency in the Production of Plastic Parts

The urgent need to decrease the time to market for plastic parts and to minimize downtime caused by detected defects led the client to seek an innovative, tailor-made solution. In this article, we focus on the artificial vision technology used, a key element in achieving the desired performance of the customized injection machine.

Integration of Artificial Vision with Deep Learning

The designed machine contains two parallel lines, fed by two rolls of metal band, which transport the plastic parts through the injection process. The goal is to quickly detect any defects in the plastic parts in order to reduce downtime. For this purpose, Cognex Deep Learning technology was used, which proved crucial to the project's success.




Detail of the metal bands that transport the plastic parts

Phases of the Machine

Feeder: Beginning of the process with the introduction of the metal bands.
Pre-processing: Alignment and preparation of the bands for injection.
Injection: Where the plastic is injected to form the parts.
Post-processing: Checking for potential defects.
Winder: Collection of the metal bands with the finished plastic parts.
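The five phases above can be sketched as a simple sequential pipeline. This is a hypothetical Python illustration: the phase names come from the list, but the per-phase logic (the `Strip` fields and what each function does) is invented for demonstration only.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the five machine phases as a sequential pipeline.
# Phase names follow the article; the per-phase logic is illustrative only.

@dataclass
class Strip:
    aligned: bool = False
    injected: bool = False
    defects: list = field(default_factory=list)

def feeder(strip: Strip) -> Strip:
    return strip  # metal band is introduced into the machine

def pre_processing(strip: Strip) -> Strip:
    strip.aligned = True  # band aligned and prepared for injection
    return strip

def injection(strip: Strip) -> Strip:
    strip.injected = True  # plastic injected to form the parts
    return strip

def post_processing(strip: Strip) -> Strip:
    # quality check: potential defects would be appended here
    return strip

def winder(strip: Strip) -> Strip:
    return strip  # finished band is collected

PHASES = [feeder, pre_processing, injection, post_processing, winder]

def run_line(strip: Strip) -> Strip:
    """Run one strip through all five phases in order."""
    for phase in PHASES:
        strip = phase(strip)
    return strip
```

The point of the sketch is simply that each strip passes through every station in a fixed order, with the vision checks concentrated in pre-processing (alignment) and post-processing (defect detection).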






Technologies and Equipment Used

To tackle this challenge, an innovative approach was chosen, using four Cognex cameras, most notably:


2 IS3805M Cameras


These cameras were used for the precise alignment of the metal strips, essential for the accuracy of the injection process. One camera was placed at the entry of the pre-processing station and another at the entry of the post-processing station (immediately after the injector).




Alignment is ensured by measuring the distance between a specific line on the strip (located on the periphery of the pattern, shown in green) and a fixed reference point (the central circle, shown in red).



During the process, the strips move continuously while the control system searches for the specific pattern (shown in green on the belt) and tracks its position relative to the red reference. When the distance between the pattern's reference line and the fixed reference point matches the parameters set by the operator on the HMI, the system pauses the movement. From that moment, the injection process or the quality verification can start, depending on the workstation configuration. Once that step completes, the actuators are deactivated and the belt resumes its movement. This cycle repeats continuously, ensuring efficiency and accuracy on the production line.
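The pause-process-resume cycle described above can be sketched as a simple polling loop. This is a hypothetical Python illustration: `measure_offset_mm` stands in for the camera's pattern-to-reference distance measurement, the belt and station callbacks are placeholders, and the target and tolerance values are invented examples, not values from the project.

```python
import time

# Illustrative sketch of the alignment cycle. The measurement function and
# belt/actuator callbacks are hypothetical placeholders; the numeric values
# below are example settings, not the project's real HMI parameters.

TARGET_MM = 12.0      # distance set by the operator on the HMI (example)
TOLERANCE_MM = 0.2    # acceptable deviation from the target (example)

def aligned(offset_mm: float,
            target_mm: float = TARGET_MM,
            tol_mm: float = TOLERANCE_MM) -> bool:
    """True when the measured pattern-to-reference distance is within tolerance."""
    return abs(offset_mm - target_mm) <= tol_mm

def alignment_cycle(measure_offset_mm, stop_belt, start_belt, process_station):
    """One cycle: wait for alignment, pause the belt, process, then resume."""
    while not aligned(measure_offset_mm()):
        time.sleep(0.01)          # belt keeps moving; keep polling the camera
    stop_belt()                   # pause movement once the distance matches
    process_station()             # inject or run the quality verification
    start_belt()                  # actuators released, belt resumes
```

In the real machine this logic runs continuously, one cycle per part position, with the station callback chosen by the workstation configuration.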


2 ISD905M Deep Learning Cameras




These cameras were used for the detailed inspection of the injection process, allowing the simultaneous analysis of eight samples per camera. This model was chosen for its efficiency in processing multiple samples at once with a single algorithm, significantly speeding up the verification process.


The main advantage of this approach lies in the significant reduction of programming time, as it simplifies the training process. Instead of developing eight distinct Edge Learning algorithms, one for each type of piece, a single Deep Learning algorithm was implemented and applied eight times to accurately identify each injected piece. This approach not only accelerated development but also optimized the detection process, demonstrating the effectiveness of Deep Learning in industrial automation applications.
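The idea of reusing one trained model across all eight sample positions can be sketched as follows. This is a hypothetical Python illustration: `classify_part` stands in for the trained Deep Learning tool, and the 4x2 region-of-interest grid and its dimensions are assumptions for the example, not the project's actual layout.

```python
# Sketch of running a single trained model over eight regions of interest,
# instead of maintaining eight separately trained algorithms.
# `classify_part` is a placeholder for the trained Deep Learning tool;
# the 4x2 grid and ROI sizes are illustrative assumptions.

from typing import Callable, List, Tuple

Roi = Tuple[int, int, int, int]  # x, y, width, height

def make_roi_grid(cols: int = 4, rows: int = 2,
                  w: int = 160, h: int = 160) -> List[Roi]:
    """Illustrative grid covering eight sample positions in one image."""
    return [(c * w, r * h, w, h) for r in range(rows) for c in range(cols)]

def crop(image, roi: Roi):
    """Cut one rectangular region out of a row-major image."""
    x, y, w, h = roi
    return [row[x:x + w] for row in image[y:y + h]]

def inspect_image(image, rois: List[Roi],
                  classify_part: Callable) -> List[str]:
    """Apply the same trained model to every ROI and collect the verdicts."""
    return [classify_part(crop(image, roi)) for roi in rois]
```

Training effort then scales with one model rather than eight: the same network sees crops from every position, which is what cut the programming and training time in this project.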


The performance and efficiency of algorithm training are directly related to the available computing power. In this project, each training iteration required a wait of approximately 15 minutes to generate a new neural network.


In-Sight Vision Suite


The configuration of the Cognex Cameras was made possible thanks to the In-Sight Vision Suite. This Cognex platform is ideal for building advanced and highly customized applications. With a robust design, it offers users the flexibility to adjust essential project parameters without the need for programming. It facilitates quick setup and execution of tasks, allowing continuous adaptation of applications as needed.


The platform evolves through the integration of customized ‘feature packs’, meeting specific requirements. In this particular case, it was necessary to include the In-Sight ViDi 1.8.0 package, specific for Deep Learning, and specialized in facilitating the application of artificial intelligence techniques and neural networks in image processing.




The menus available in the In-Sight Vision Suite play a crucial role in the management and operation of the cameras, offering a direct interface to control critical functionalities. With these tools, it is possible to:

Configure or modify the camera's IP address.
Perform backups and restorations of the installed software.
Set the initial operation mode (online for direct operation, or programming mode for offline adjustments).
Execute a factory reset to return to the original settings.
Activate various industrial communication protocols.

These functionalities ensure that users can customize and maintain the cameras according to the specific needs of each application, ensuring maximum efficiency and adaptability in the industrial environment.


The In-Sight Vision Suite platform from Cognex offers two distinct work environments: the EasyBuilder, ideal for beginners who prefer a more accessible interface, and the SpreadSheet, for advanced users seeking greater flexibility and more complex and detailed configuration. It is important to note that compatibility with these environments varies according to the camera model: while the IS3805M model allows the use of both EasyBuilder and SpreadSheet, the ISD905M model operates exclusively with the SpreadSheet.


In the execution of this project, the SpreadSheet environment was chosen, a choice that ensures remarkable flexibility thanks to its wide range of tools and parametrization options. Its intuitive, Excel-like interface eases user adaptation and boosts productivity.


Parametrization on the In-Sight Vision Suite platform


Deep Learning

Deep Learning is an advanced AI-based software solution for image analysis in complex applications. This technology stands out for its ability to automate challenging tasks that would be time-consuming or complicated to program with rule-based algorithms, offering an efficient alternative to manual inspection. The tool is designed to handle natural variations in processes, differentiating acceptable anomalies from unacceptable ones, which is crucial for the development of applications with high variability.

To meet the specific needs of various tasks, Cognex segmented the Deep Learning tools into four main categories:

Locate: finding parts and features within the image.
Analyze: detecting defects and other anomalies.
Classify: sorting objects or complete scenes into categories.
Read: reading deformed or poorly printed characters.

Each of these tools is optimized to maximize performance and efficiency in its respective application area.

Furthermore, Deep Learning significantly simplifies the process of developing computer vision applications, from labeling and training to deployment, thanks to features such as label verification, automatic parameter adjustment, and quick duplication of lines. These characteristics allow for rapid optimization of applications, reducing the time needed for training and validation, and facilitating the scalability of operations. The seamless integration with other Cognex products and software ensures compatibility and allows an effective introduction of the latest artificial vision technologies, without additional engineering costs.


This article highlights the transformative role of Deep Learning technology in the manufacturing industry, particularly in improving manufacturing processes and ensuring product quality. The project’s success, marked by significant reductions in downtime and efficient defect detection, underscores the ability of Deep Learning to interpret and analyze complex visual data in real time. This technology not only simplified the training process, eliminating the need to develop multiple specific algorithms, but also optimized piece recognition and classification, demonstrating remarkable flexibility and effectiveness.


Moreover, the use of the In-Sight Vision Suite for programming and parametrization of computer vision systems reflects the growing trend of integrating intuitive interfaces and accessible development tools, facilitating the implementation of artificial intelligence and machine learning solutions in industrial contexts.


As the industry advances, adopting this and other artificial intelligence technologies becomes a key element in maintaining competitiveness, improving operational efficiency, and achieving product quality excellence. This practical example demonstrates that, with the proper integration of technology, companies can overcome traditional limitations and open new pathways for growth and innovation. Contact us and discover how we can optimize your processes!
