Machine vision systems in 2026: a buyer's guide to the main architectures

A traditional machine vision system costs between 20,000 and 80,000 euros per inspection station, and that figure does not count the integrator time or the changeover downtime it adds to your line. The architecture you choose, not the brand, decides whether that money solves your problem or buys a system that breaks the first time a product variant changes.
Most guides on machine vision systems start with a 15-paragraph history lesson on CCD sensors. We will skip that. If you manage a production line and you have a quality problem that a human inspector cannot keep up with, you need to understand three axes of choice and roughly what each one costs. The rest is detail your integrator will sort out.
What counts as a machine vision system
A machine vision system is the full stack that converts light bouncing off a product into an accept or reject decision. That stack has four layers. A camera captures the image. Lighting makes the defect visible. Software analyses the image and outputs a verdict. A trigger and an output route that verdict back into your PLC or reject mechanism.
Anything simpler than that (for example, a laser distance sensor or a photoelectric beam break) is a presence sensor, not a vision system. Anything bigger (a full inline quality station with robotics and rejection gates) is still, at its core, a vision system wrapped in more hardware.
For a deeper dive into the individual components (cameras, lenses, lighting and software vendors), see our companion piece on industrial image processing. This article is about how those components get assembled into a working system, and which assembly fits which production problem.
Axis 1: rule-based versus AI-based
The oldest split in machine vision is between rule-based systems and AI-based systems. Cognex, Keyence and every classic library (Halcon, OpenCV, VisionPro) started in the rule-based world. The system is programmed to look for specific features. A hole should be 4.2 millimetres in diameter. A logo should sit 12 millimetres from the left edge. A surface should be uniformly grey with a standard deviation below a threshold.
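The rule-based logic above fits in a few lines of code. This is a minimal sketch; the function name and all tolerances are illustrative stand-ins, not values from any specific vendor, and in a real system the measured values would come from a vision library rather than being passed in directly.

```python
# Minimal sketch of a rule-based inspection check. The measured values
# (hole diameter, logo offset, grey-level spread) would come from a
# vision library in practice; all tolerances here are illustrative.

def rule_based_verdict(hole_diameter_mm: float,
                       logo_offset_mm: float,
                       grey_stddev: float) -> bool:
    """Return True (accept) only if every hard-coded rule passes."""
    rules = [
        abs(hole_diameter_mm - 4.2) <= 0.1,   # hole must be 4.2 mm +/- 0.1
        abs(logo_offset_mm - 12.0) <= 0.5,    # logo 12 mm from left edge
        grey_stddev < 8.0,                    # surface uniformly grey
    ]
    return all(rules)

print(rule_based_verdict(4.25, 12.1, 5.0))   # good part -> True
print(rule_based_verdict(4.25, 12.1, 15.0))  # blotchy surface -> False
```

The brittleness described below follows directly from this structure: every rule encodes an assumption about the product and the lighting, and any change invalidates the hard-coded numbers.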
Rule-based works beautifully when your product is consistent, your defects are geometrically defined and your lighting is locked down. It breaks the moment reality gets messy. A different batch of raw material, a new product variant, a shift in ambient light from the skylight above the line, and suddenly your false reject rate doubles overnight.
AI-based systems flip the logic. Instead of programming what a good part looks like, you show the system examples of good parts and it learns a statistical model of normality. Anything that deviates gets flagged. The approach is called anomaly detection, and we wrote about it in detail here.
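The core idea can be sketched with a toy statistical model. This is a deliberately simplified stand-in: real systems extract feature vectors from images with a neural network, whereas here the features, threshold and sample counts are all synthetic assumptions chosen to illustrate the "learn normality, flag deviation" pattern.

```python
import numpy as np

# Toy sketch of anomaly detection: learn the statistics of "good"
# feature vectors, then flag anything too far from that normality.
# The feature vectors are synthetic stand-ins for image features.

rng = np.random.default_rng(0)
good_parts = rng.normal(loc=1.0, scale=0.05, size=(200, 8))  # 200 good samples

mean = good_parts.mean(axis=0)
std = good_parts.std(axis=0) + 1e-9

def anomaly_score(features: np.ndarray) -> float:
    """Largest per-feature z-score against the learned 'normal' model."""
    return float(np.max(np.abs(features - mean) / std))

THRESHOLD = 5.0  # in practice tuned on held-out good parts

normal_part = rng.normal(1.0, 0.05, size=8)
scratched_part = normal_part.copy()
scratched_part[3] += 1.0  # one feature deviates strongly

print(anomaly_score(normal_part) > THRESHOLD)     # within normality
print(anomaly_score(scratched_part) > THRESHOLD)  # flagged -> True
```

Note that nothing in this model describes what a defect looks like; only what "good" looks like. That is why adding a new variant means collecting fresh reference images rather than writing new rules.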
The practical difference is what happens when your production changes. A rule-based system needs re-programming by an integrator, which typically means a change order and three to six weeks. An AI-based system needs fresh reference images, which a line operator can collect in an hour. For a plant that runs more than three product variants a year, that difference compounds fast.
Axis 2: single-camera versus multi-camera
The second axis is how many angles you need. A single-camera system is the default for flat or cylindrical products inspected from one face. Labels on bottles. Surface defects on sheet metal. Print quality on cartons. One camera, one lens, one lighting setup, one decision.
Multi-camera systems come into play when defects can occur on any face of a three-dimensional part. A machined aluminium housing might need four cameras around it to catch scratches on every side. An injection-moulded part with transparent and opaque regions might need two cameras with different lighting angles firing in sequence.
Multi-camera setups roughly double to quadruple your hardware and software cost. They also multiply the synchronisation complexity. If camera 1 sees the part at timestamp T and camera 3 sees it at T + 80 milliseconds, your software has to stitch both frames to the same part ID. Classic systems do this with PLC-triggered encoders. AI systems do it with per-camera inference and a shared reject logic layer.
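The encoder-based stitching can be sketched as follows. The part pitch, camera offsets and frame structure are all illustrative assumptions; the point is only that each camera's fixed downstream offset is subtracted before quantising by the part pitch, so frames taken at different times map to the same part ID.

```python
# Sketch of stitching frames from multiple cameras to one part ID.
# Assumes each camera records the line-encoder count at capture time.
# All names and numbers are illustrative, not from any real system.

PART_PITCH = 100  # encoder counts between consecutive parts (assumed)

def part_id(encoder_count: int, camera_offset: int) -> int:
    """Remove the camera's fixed downstream offset, then quantise
    by the part pitch to recover a shared part ID."""
    return (encoder_count - camera_offset) // PART_PITCH

# Camera 3 sits further down the line, so its frame of the same
# physical part arrives at a higher encoder count.
frame_cam1 = {"camera": 1, "encoder": 1234}
frame_cam3 = {"camera": 3, "encoder": 1234 + 250}

offsets = {1: 0, 3: 250}  # calibrated once at commissioning

id1 = part_id(frame_cam1["encoder"], offsets[1])
id3 = part_id(frame_cam3["encoder"], offsets[3])
print(id1 == id3)  # both frames map to the same part -> True
```

The reject logic then waits until every camera has reported a verdict for a given part ID before firing the ejector.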
Rule of thumb: start with single-camera. Go multi-camera only when a defect audit shows that more than 15 percent of your escapes happen on faces your single camera cannot see.
Axis 3: fixed-line versus fleet-based
The third axis is the newest and the one most guides still ignore. Traditionally every inspection station has been fixed-line. A camera on a rigid mount, a ring light, a sealed enclosure, cabled to a controller in a cabinet. Installation takes two to four weeks. Commissioning takes another two. The station cannot be moved without re-commissioning.
Fleet-based inspection is the mobile alternative that has become practical in the last two years, driven by small form-factor sensors (modern smartphones are now the highest-resolution industrial cameras most factories can afford) and on-device AI. A fleet-based system is a set of portable inspection devices that any operator can pick up, place in front of the line and use to run a random-sample or 100-percent check.
This matters for three reasons. First, you pay per inspection task, not per camera bolted to a frame, so adding a new inspection point is a per-shift decision rather than a capex project. Second, the same hardware can inspect three different product lines on Monday, Wednesday and Friday if their takt time allows. Third, the inspection can move with the product: into a pre-packing station, onto a trolley at the end of a bottleneck, into a quality lab for deeper sampling.
At Enao we focus on exactly this category. A fleet-based setup using an iPhone and an 80-euro ring light replaces an 80,000-euro fixed station for a useful subset of inspection tasks, especially where volumes or variants make a fixed station unjustifiable.
When each type makes sense
The three axes give you eight combinations. In practice five of them cover almost every inspection problem in discrete manufacturing. The table below maps line patterns to the architecture that fits.
| Line pattern | Rule vs AI | Single vs multi | Fixed vs fleet | Typical capex per station |
|---|---|---|---|---|
| High-volume single SKU, tight spec, stable lighting | Rule | Single | Fixed | 20,000–40,000 EUR |
| High-mix low-volume, 5+ variants per week | AI | Single or multi | Fleet | 2,000–10,000 EUR |
| Complex 3D part with defects on multiple faces | AI or rule | Multi | Fixed | 60,000–120,000 EUR |
| Random sampling across several lines | AI | Single | Fleet | 2,000–5,000 EUR per device |
| New line, unknown defect types, need to iterate | AI | Single | Fleet, migrate to fixed | 2,000 EUR to start |
The last row is the one most buyers get wrong. They specify a fixed-line multi-camera rule-based system for a line where nobody yet knows what the defect catalogue looks like. Six months later they own a 90,000-euro system that catches three of the seven defects that actually matter. Starting fleet-based for the first year and migrating to a fixed station once the defect catalogue stabilises usually saves two-thirds of the lifetime cost.
For the finance side of this trade-off, we walked through the capex-versus-opex calculation in a separate piece.
How to shortlist without regret
Three questions cut most shortlists in half.
First, how many variants does the system need to handle in its first year of life? If the answer is more than three, rule-based is almost certainly the wrong choice regardless of how low your per-part price is.
Second, what happens if the defect catalogue changes? Ask the vendor for the exact process and timeline to add a new defect class after go-live. A good answer is measured in hours and can be done by a line operator. A bad answer is measured in weeks and requires a site visit.
Third, what is the total cost of ownership across three years, not the list price? A fixed-line rule-based system at 40,000 euros list often costs 120,000 euros across three years once you count integration, re-programming for product changes and the maintenance contract. A fleet-based AI system at 500 euros per device per month is 18,000 euros across three years and covers updates.
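As a sanity check, the three-year arithmetic from the paragraph above can be written out. The fixed-line breakdown (integration, re-programming, maintenance) is an assumed split of the quoted 120,000-euro total, not vendor pricing; only the list price and the 500-euro monthly fleet rate come from the text.

```python
# Back-of-envelope three-year TCO comparison. The fixed-line cost
# split below is an illustrative assumption that sums to roughly
# the 120,000 EUR total quoted in the article.

YEARS = 3

fixed_line = {
    "list_price": 40_000,
    "integration": 30_000,              # assumed one-off integrator fee
    "reprogramming_per_year": 10_000,   # assumed change orders
    "maintenance_per_year": 6_667,      # assumed service contract
}
fixed_tco = (fixed_line["list_price"]
             + fixed_line["integration"]
             + YEARS * (fixed_line["reprogramming_per_year"]
                        + fixed_line["maintenance_per_year"]))

fleet_tco = 500 * 12 * YEARS  # 500 EUR per device per month, updates included

print(fixed_tco)  # roughly 120,000 EUR
print(fleet_tco)  # 18,000 EUR
```

The ratio, not the exact figures, is the point: the recurring items dominate the fixed-line total, which is why the list price is a poor basis for comparison.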
Two external references are worth reading if you want to go deeper on the component choices. Cognex has a solid overview of system types organised by sensor and camera category. Teledyne's machine vision 101 covers the optical fundamentals. Both are useful once you have decided your architecture on the three axes above.
Getting started
If you are evaluating machine vision systems right now, the fastest way to learn what fits your line is to run a two-week pilot on one inspection task. Pick the defect that causes the most complaints, gather 200 reference images of good parts and see whether an AI system can flag the bad ones without being told what to look for.
A fleet-based starter kit costs under 2,000 euros to try. A fixed-line classic system starts at roughly 60,000 euros before you have inspected a single part. The experiment is cheaper than the RFP.
For a curated shortlist of AI-based vendors serving this space, see our 2026 ranking of AI machine vision systems for quality control. If you want to compare notes with other plant managers and quality leads already using these systems, join our community to see how teams are shipping inspection in days rather than quarters.