
    Machine vision systems in 2026: a buyer's guide to the main architectures

    Korbinian Kuusisto, CEO & Founder, Enao Vision
    April 1, 2026

    A machine vision system converts light bouncing off a product into an accept-or-reject decision. The full stack has four layers: a camera captures the image, lighting makes the defect visible, software analyses the image and outputs a verdict, and a trigger routes that verdict back into your PLC. Traditional fixed-line systems cost 20,000 to 80,000 euros per inspection station, plus integrator time and changeover downtime. The architecture you choose, not the brand, decides whether that money solves your problem.

    Most guides on machine vision systems start with a 15-paragraph history lesson on CCD sensors. We will skip that. If you manage a production line and you have a quality problem that a human inspector cannot keep up with, you need to understand three axes of choice and roughly what each one costs. The rest is detail your integrator will sort out.

    What counts as a machine vision system?

    The four layers work as one loop: the camera captures the frame, lighting makes the defect visible, software turns the frame into a verdict, and the trigger and output route that verdict back into your PLC or reject mechanism. Each layer trades off against the others. Cheap lighting forces more expensive software. A faster camera can simplify the trigger logic. The system only works as well as its weakest layer, which is why most failed installations turn out to be a lighting problem dressed up as a software problem.
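    The loop above can be sketched in a few lines. This is a toy illustration with hypothetical names, not any vendor's API; the real software layer computes the defect score from pixels.

    ```python
    # Minimal sketch of the four-layer inspection loop: trigger -> capture ->
    # analyse -> verdict -> PLC. All names and values here are illustrative.

    def analyse(image, threshold=0.5):
        """Stand-in for the software layer: score the frame, return a verdict."""
        defect_score = image["defect_score"]  # a real system derives this from pixels
        return "REJECT" if defect_score >= threshold else "ACCEPT"

    def inspect(part_stream, plc):
        """One iteration per triggered part."""
        for image in part_stream:                    # camera + lighting produce the frame
            verdict = analyse(image)                 # software layer
            plc.append((image["part_id"], verdict))  # trigger/output routes the verdict

    plc_log = []
    inspect([{"part_id": 1, "defect_score": 0.1},
             {"part_id": 2, "defect_score": 0.9}], plc_log)
    ```

    The point of the sketch is the shape of the loop, not the scoring: whichever layer is weakest (lighting that hides the defect, a score threshold set wrong) degrades the verdict the PLC receives.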

    Anything simpler than that (for example, a laser distance sensor or a photoelectric beam break) is a presence sensor, not a vision system. Anything bigger (a full inline quality station with robotics and rejection gates) is still, at its core, a vision system wrapped in more hardware.

    For a deeper dive into the individual components (cameras, lenses, lighting and software vendors), see our companion piece on industrial image processing. This article is about how those components get assembled into a working system, and which assembly fits which production problem.

    How does rule-based machine vision compare to AI-based machine vision?

    The oldest split in machine vision is between rule-based systems and AI-based systems. Cognex, Keyence and every classic library (Halcon, OpenCV, VisionPro) started in the rule-based world. The system is programmed to look for specific features. A hole should be 4.2 millimetres in diameter. A logo should sit 12 millimetres from the left edge. A surface should be uniformly grey with a standard deviation below a threshold.

    Rule-based works beautifully when your product is consistent, your defects are geometrically defined and your lighting is locked down. It breaks the moment reality gets messy. A different batch of raw material, a new product variant, a shift in ambient light from the skylight above the line, and suddenly your false reject rate doubles overnight.

    AI-based systems flip the logic. The approach works in two stages. You start by showing the model examples of good parts so it can flag anything that looks unusual, which surfaces candidate defects without anyone labelling them first. Then you label those defects, group them into types and train supervised detection models that classify each one. That second step is what makes the approach robust in production, with high precision and an actionable verdict on every part rather than a plain pass-fail signal. We broke down this trade-off in our guide on anomaly detection versus defect detection.
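    The two stages can be sketched with toy feature vectors. This is a deliberately simplified stand-in (nearest-neighbour distances instead of trained networks, made-up numbers) to show the control flow: stage 1 flags unusual parts against good references, stage 2 names the defect once labels exist.

    ```python
    import numpy as np

    # Stage 1: score new parts by distance to known-good reference features.
    good = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.05]])  # features of good parts

    def anomaly_score(x, references):
        """Distance to the nearest good reference; high means unusual."""
        return float(np.min(np.linalg.norm(references - x, axis=1)))

    # Stage 2: once flagged parts are labelled, a supervised model names the defect.
    # Here a nearest-prototype lookup stands in for the trained detector.
    labelled_defects = {"scratch": np.array([3.0, 1.0]), "dent": np.array([1.0, 3.0])}

    def classify(x):
        return min(labelled_defects, key=lambda k: np.linalg.norm(labelled_defects[k] - x))

    candidate = np.array([2.9, 1.1])
    verdict = classify(candidate) if anomaly_score(candidate, good) > 0.5 else "good"
    ```

    The division of labour is the point: stage 1 needs only good parts, so it runs from day one; stage 2 is what turns "this looks odd" into "this is a scratch", which is the verdict a reject mechanism can act on.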

    The practical difference is what happens when your production changes. A rule-based system needs re-programming by an integrator, which typically means a change order and three to six weeks. An AI-based system needs fresh reference images, which a line operator can collect in an hour. For a plant that runs more than three product variants a year, that difference compounds fast.

    When do you need multiple cameras in a machine vision system?

    The second axis is how many angles you need. A single-camera system is the default for flat or cylindrical products inspected from one face. Labels on bottles. Surface defects on sheet metal. Print quality on cartons. One camera, one lens, one lighting setup, one decision.

    Multi-camera systems come into play when defects can occur on any face of a three-dimensional part. A machined aluminium housing might need four cameras around it to catch scratches on every side. An injection-moulded part with transparent and opaque regions might need two cameras with different lighting angles firing in sequence.

    Multi-camera roughly doubles to quadruples your hardware and software cost. It also multiplies the synchronisation complexity. If camera 1 sees the part at timestamp T and camera 3 sees it at T + 80 milliseconds, your software has to stitch both frames to the same part ID. Classic systems do this with PLC-triggered encoders. AI systems do it with per-camera inference and a shared reject logic layer.
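    The stitching problem above reduces to timestamp bookkeeping. A minimal sketch, with hypothetical offsets: if the transport delay between camera positions is fixed, a frame belongs to a part when its timestamp lands near the expected arrival time.

    ```python
    # Matching multi-camera frames to one part ID (illustrative values only).
    # The encoder tells us when a part passed camera 1; fixed transport delays
    # give the expected frame time at each downstream camera.

    CAMERA_OFFSET_MS = {1: 0, 3: 80}   # camera 3 sees the part 80 ms after camera 1
    TOLERANCE_MS = 10                  # jitter budget for trigger and transport

    def frame_belongs_to_part(part_at_cam1_ms, camera_id, frame_ts_ms):
        """True if this frame shows the part that passed camera 1 at the given time."""
        expected = part_at_cam1_ms + CAMERA_OFFSET_MS[camera_id]
        return abs(frame_ts_ms - expected) <= TOLERANCE_MS

    # Part passed camera 1 at t = 1000 ms; a camera-3 frame arrives at 1078 ms.
    belongs = frame_belongs_to_part(1000, 3, 1078)
    ```

    Classic PLC-encoder setups and per-camera AI inference both end up doing a version of this; the difference is whether the matching lives in ladder logic or in the shared reject layer.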

    Rule of thumb: start with single-camera. Go multi-camera only when a defect audit shows that more than 15 percent of your escapes happen on faces your single camera cannot see.

    What's the difference between fixed-line and fleet-based machine vision?

    The third axis is the newest and the one most guides still ignore. Traditionally every inspection station has been fixed-line. A camera on a rigid mount, a ring light, a sealed enclosure, cabled to a controller in a cabinet. Installation takes two to four weeks. Commissioning takes another two. The station cannot be moved without re-commissioning.

    Fleet-based inspection is the mobile alternative that has become practical in the last two years, driven by small form-factor sensors (modern smartphones are now the highest-resolution industrial cameras most factories can afford) and on-device AI. A fleet-based system is a set of portable inspection devices that any operator can pick up, place in front of the line and use to run a random-sample or 100-percent check.

    This matters for three reasons. First, you pay per inspection task, not per camera bolted to a frame, so adding a new inspection point is a per-shift decision rather than a capex project. Second, the same hardware can inspect three different product lines on Monday, Wednesday and Friday if their takt time allows. Third, the inspection can move with the product: into a pre-packing station, onto a trolley at the end of a bottleneck, into a quality lab for deeper sampling.

    The mounting setup is what makes this practical in a real plant. With SP Connect bases mounted on every line, an iPhone clicks in and out in seconds and the app automatically reconnects to the right inspection site. You only own as many iPhones as you have lines running at once, not one per line. Five iPhones covering ten mounted lines is five licenses instead of ten, and no capex sitting idle on a line that is not producing this shift.

    At Enao we focus on exactly this category. A fleet-based setup using an iPhone and an 80-euro ring light replaces an 80,000-euro fixed station for a useful subset of inspection tasks, especially where volumes or variants make a fixed station unjustifiable.

    Which machine vision architecture fits which production line?

    The three axes give you eight combinations. In practice five of them cover almost every inspection problem in discrete manufacturing. The table below maps line patterns to the architecture that fits.

    Line pattern | Rule vs AI | Single vs multi | Fixed vs fleet | Typical capex per station
    High-volume single SKU, tight spec, stable lighting | Rule | Single | Fixed | 20,000–40,000 EUR
    High-mix low-volume, 5+ variants per week | AI | Single or multi | Fleet | 2,000–10,000 EUR
    Complex 3D part with defects on multiple faces | AI or rule | Multi | Fixed | 60,000–120,000 EUR
    Random sampling across several lines | AI | Single | Fleet | 2,000–5,000 EUR per device
    New line, unknown defect types, need to iterate | AI | Single | Fleet, migrate to fixed | 2,000 EUR to start

    The last row is the one most buyers get wrong. They specify a fixed-line multi-camera rule-based system for a line where nobody yet knows what the defect catalogue looks like. Six months later they own a 90,000-euro system that catches three of the seven defects that actually matter. Starting fleet-based for the first year and migrating to a fixed station once the defect catalogue stabilises usually saves two-thirds of the lifetime cost.

    For the finance side of this trade-off, we walked through the capex-versus-opex calculation in a separate piece.

    How do you shortlist machine vision systems without regret?

    Three questions cut most shortlists in half.

    First, how many variants does the system need to handle in its first year of life? If the answer is more than three, rule-based is almost certainly the wrong choice regardless of how low your per-part price is.

    Second, what happens if the defect catalogue changes? Ask the vendor for the exact process and timeline to add a new defect class after go-live. A good answer is measured in hours and can be done by a line operator. A bad answer is measured in weeks and requires a site visit.

    Third, what is the total cost of ownership across three years, not the list price? A fixed-line rule-based system at 40,000 euros list often costs 120,000 euros across three years once you count integration, re-programming for product changes and the maintenance contract. A fleet-based AI system at 500 euros per device per month is 18,000 euros across three years and covers updates.
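    The three-year comparison is plain arithmetic, using the illustrative numbers from the paragraph above:

    ```python
    # Three-year total cost of ownership, with the example figures from the text.
    fixed_list_price = 40_000          # EUR, fixed-line rule-based system, list
    fixed_three_year_total = 120_000   # EUR, once integration, re-programming
                                       # and the maintenance contract are counted

    fleet_per_device_month = 500       # EUR per device per month, subscription
    fleet_three_year_total = fleet_per_device_month * 12 * 3  # updates included

    multiple = fixed_three_year_total / fleet_three_year_total
    ```

    Run with these figures, the fleet subscription comes to 18,000 euros over three years, roughly a 6.7x difference against the fixed-line total. Your own numbers will differ; the point is to compare totals, not list prices.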

    Two external references are worth reading if you want to go deeper on the component choices. Cognex has a solid overview of system types organised by sensor and camera category. Teledyne's machine vision 101 covers the optical fundamentals. Both are useful once you have decided your architecture on the three axes above.

    How do you get started with machine vision systems?

    If you are evaluating machine vision systems right now, the fastest way to learn what fits your line is to run a two-week pilot on one inspection task. Pick the defect that causes the most complaints, gather 200 reference images of good parts and see whether an AI system can flag the bad ones without being told what to look for.

    A fleet-based iPhone pilot costs under 1,000 euros in hardware to try. You need a refurbished iPhone, a lamp, cables and a mount. A fixed-line classic system costs 60,000 euros just to get to a quote. The experiment is cheaper than the RFP.

    For a curated shortlist of AI-based vendors serving this space, see our 2026 ranking of AI machine vision systems for quality control. If you want to compare notes with other plant managers and quality leads already using these systems, join our community to see how teams are shipping inspection in days rather than quarters.

    Frequently asked questions

    How accurate is a machine vision system on a production line?

    Day-one accuracy on a well-defined defect lands at 80 to 90 percent for AI systems and 90 to 99 percent for rule-based systems on simple binary checks. After feedback loops on production data, AI accuracy climbs to 95 to 99 percent, while rule-based accuracy stays where it started but breaks the moment products vary. The number you actually get depends on lighting, the size and quality of the training data, and how big the defect is relative to the sensor's pixels.
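    That last dependency, defect size relative to the sensor's pixels, is worth making concrete. A sketch with illustrative numbers (not from the article): divide the field of view by the sensor resolution to get millimetres per pixel, then see how many pixels span your smallest defect.

    ```python
    # How big a defect is in pixels (illustrative numbers).
    sensor_px = 4000          # pixels across the sensor's long axis
    field_of_view_mm = 200.0  # width of the scene the lens projects onto the sensor
    defect_mm = 0.5           # smallest defect you need to catch

    mm_per_pixel = field_of_view_mm / sensor_px      # 0.05 mm per pixel
    pixels_across_defect = defect_mm / mm_per_pixel  # about 10 pixels
    ```

    A common rule of thumb is to want at least a handful of pixels across the smallest defect; once a defect shrinks to a pixel or two, no software layer can reliably recover it, and you need either a narrower field of view or a higher-resolution sensor.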

    How long does it take to install a machine vision system?

    Fixed-line traditional systems take four to eight weeks from purchase order to first inspection: two to four weeks for hardware shipping and installation, then two more weeks for commissioning and rule programming. Fleet-based AI systems run in days. You unbox an iPhone, click it into a mount, train a model on 200 reference images, and start inspecting. The trade-off is that fixed-line systems handle higher throughput once they are running, while fleet-based systems are easier to redeploy when product mix shifts.

    Can a machine vision system handle multiple product variants?

    AI-based systems handle variants well. You collect a few hundred new reference images for each variant and the model adapts in hours. Rule-based systems struggle with variants because each new product typically needs an integrator visit and a fresh round of programming. If your line runs more than three variants a year, factor that difference into your total cost of ownership before you sign the order.

    How much does a machine vision system cost in 2026?

    Fixed-line systems cost 20,000 to 80,000 euros per inspection station, plus integrator fees of 5,000 to 15,000 euros and an annual maintenance contract. Fleet-based AI systems running on iPhones come in at under 1,000 euros for hardware (refurbished iPhone, lamp, mount, cables) and a software subscription that typically runs 300 to 600 euros per device per month. Across three years, the architecture you choose has more impact on total cost than the brand or feature list.

    Key takeaways

    • A machine vision system has four layers (camera, lighting, software, trigger) and converts product images into accept-or-reject decisions in under a second.
    • Three architectural axes drive most decisions: rule-based versus AI, single-camera versus multi-camera, and fixed-line versus fleet-based.
    • AI-based systems handle product variants and changing defect catalogues without re-programming, which matters most when your line runs more than three variants per year.
    • Fleet-based inspection on iPhones replaces 80,000-euro fixed stations for surface, assembly and presence checks at a fraction of the lifetime cost.
    • Total cost of ownership across three years usually beats list price as the better decision metric: a fixed-line system at 40,000 euros list often costs 120,000 euros over three years.
