
    Why your machine vision system breaks on changeovers

    Korbinian Kuusisto
    April 17, 2026

    A machine vision system that works well on a stable line with one SKU is a proof of concept. The same system that still works well on a line that changes SKUs four times a shift is a production system. The gap between the two is where most industrial-camera projects quietly die.

    This post is about that gap: what breaks on a changeover, why a classic rule-based vision system degrades faster than the camera itself, and what an ML-based inspection station does differently to stay useful across the SKU family.

    What actually changes at a changeover

    Five things shift when a line swaps SKUs:

    - Part geometry: shape, size, orientation.
    - Surface finish: matte vs. glossy, colour, reflectivity.
    - Labels and artwork: new graphics, new position, new font.
    - Lighting conditions: ambient light changes as conveyor heights or trays change.
    - Throughput rate: not every SKU runs at the same BPM.

    Each one can individually break a vision inspection that was stable five minutes ago.

    The important point: classic vision systems require a new set of rules per SKU. New region-of-interest, new threshold, new reference image, new tolerance. On a well-run line, the integrator has coded these ahead of time and the HMI lets an operator pick the recipe. On a less well-run line, every SKU change is an argument with the vision cabinet.

    Why lighting is the single biggest changeover risk

    Classic vision systems assume a tightly controlled lighting environment. The moment a SKU has a glossy foil finish instead of a matte carton, the reflections land in different pixels and the thresholds that worked yesterday now produce false rejects. Lighting drift happens slowly (bulb degradation, dust on the diffuser) and quickly (operator moves a light to see a stuck part, then forgets to move it back). Our lighting guide for AI visual inspection covers the physics in depth, but the production-line consequence is simple: the more SKUs a line runs, the more combinations the lighting engineering has to cover.

    The recipe-management trap

    The standard industrial-camera answer to changeover is a recipe manager: per SKU, store all the thresholds and parameters, load them when the SKU changes. This works until the number of SKUs grows past roughly fifty. After that, the recipe library becomes the problem. Who owns each recipe? Who is allowed to modify it? What happens when a recipe is out of date? Which SKU was running when the false-reject rate spiked? Every mature vision integrator has a horror story about a line that produced 40% false rejects for three days because the wrong recipe loaded.
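    The recipe pattern is worth seeing in code, because the code makes the trap visible. This is a minimal sketch, not any vendor's actual API; the class names, fields, and thresholds are illustrative. Note what the `load` method does not do: it never checks that the recipe is current or that the requested SKU matches what is physically on the line.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Recipe:
    """Per-SKU inspection parameters for a classic rule-based system (illustrative)."""
    sku: str
    roi: tuple            # (x, y, width, height) of the region of interest
    gray_threshold: int   # pixel-intensity cutoff for defect segmentation
    tolerance_mm: float   # allowed dimensional deviation
    last_reviewed: date   # when a person last confirmed this recipe is current

class RecipeManager:
    def __init__(self):
        self._recipes: dict[str, Recipe] = {}

    def store(self, recipe: Recipe) -> None:
        self._recipes[recipe.sku] = recipe

    def load(self, sku: str) -> Recipe:
        # The trap: nothing here verifies the recipe is up to date, or that
        # `sku` matches the part physically running on the line right now.
        return self._recipes[sku]

manager = RecipeManager()
manager.store(Recipe("SKU-A", (0, 0, 640, 480), 128, 0.5, date(2025, 3, 1)))
recipe = manager.load("SKU-A")  # loading "SKU-B" here would silently misconfigure the line
```

    Every field in that dataclass is a hand-tuned rule that someone has to own, version, and review, per SKU. Multiply by fifty SKUs and the library, not the camera, becomes the maintenance burden.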

    This is not a bug in a specific product. It is a structural consequence of encoding inspection knowledge as a large set of hand-tuned rules. The same approach in software engineering was abandoned in the 1990s for a reason.

    What ML-based inspection does differently

    A convolutional or transformer model trained on images from the full SKU family sees changeovers as shifts in input distribution. If the training set covers all the SKUs a line is likely to run, the model generalises across them without per-SKU recoding. New SKU in the family? Label a few hundred images, fine-tune for a shift, back in production.
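    "Shifts in input distribution" can be made concrete with a toy monitor. The sketch below uses mean image brightness as a stand-in feature; a real system would compare much richer statistics, and the class, the 3-sigma band, and the tiny grayscale "images" are all illustrative assumptions, not how any particular product works.

```python
import statistics

def brightness(image: list[list[int]]) -> float:
    """Mean pixel intensity of a grayscale image (rows of 0-255 values)."""
    return statistics.fmean(p for row in image for p in row)

class DriftMonitor:
    """Flags frames whose brightness falls outside the range seen in training.

    Mean brightness is the simplest possible stand-in for 'shift in input
    distribution'; it is enough to show the mechanism.
    """
    def __init__(self, training_images: list, k: float = 3.0):
        stats = [brightness(img) for img in training_images]
        self.mean = statistics.fmean(stats)
        self.std = statistics.stdev(stats)
        self.k = k

    def in_distribution(self, image: list[list[int]]) -> bool:
        return abs(brightness(image) - self.mean) <= self.k * self.std

# Matte-carton training frames vs. a much brighter glossy-foil frame:
train = [[[100, 102], [98, 101]], [[99, 100], [101, 103]], [[100, 99], [102, 100]]]
monitor = DriftMonitor(train)
glossy = [[240, 250], [245, 248]]  # falls far outside the trained band
```

    A frame from the training family passes the check; the glossy-foil frame fails it. That failure is exactly the signal a model trained on the full SKU family avoids triggering, because the "new" SKU was already inside its training distribution.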

    The operator-facing change is equally important. On a rule-based line, the operator's relationship with the vision system is adversarial: the system blocks the line, the operator has to convince it the part is fine. On an ML-based line, the relationship is collaborative: the operator labels the new part as good, and the system incorporates that label into the next training cycle. That pattern is core to what we describe in what AI visual inspection is.

    What still breaks on an ML-based line

    ML-based inspection is not a magic fix for changeovers. Three failure modes show up reliably. First, a genuinely new SKU not represented in the training set will produce unpredictable output. Second, a radical lighting change (new bulb colour temperature, new ambient ceiling light) still drifts the input distribution enough to degrade the model. Third, the training pipeline itself has to be fast. If retraining for a new SKU takes three weeks, the line runs without the model in the meantime.

    Enao Vision addresses all three explicitly. Retraining on a new SKU runs on the iPhone or via our cloud service in minutes, not weeks. The model is shipped with a confidence score so operators see when the model is outside its trained range. And the station writes every image and decision to the batch record, so drift is visible in the logs.
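    The confidence-gating and batch-record pattern can be sketched in a few lines. This is an assumption-laden illustration, not Enao Vision's actual interface: the threshold value, the function name, and the JSON record fields are all invented for the example.

```python
import json
from datetime import datetime, timezone

CONFIDENCE_FLOOR = 0.80  # illustrative threshold; tuned per line in practice

def record_decision(batch_log: list, image_id: str, label: str, confidence: float) -> str:
    """Gate a model decision on its confidence and append it to the batch record.

    Returns the routing outcome: 'accept', 'reject', or 'operator_review'
    when the model reports it is outside its trained range.
    """
    if confidence < CONFIDENCE_FLOOR:
        outcome = "operator_review"
    else:
        outcome = "accept" if label == "good" else "reject"
    # Every image and decision goes to the batch record, so drift that
    # depresses confidence over time is visible in the logs.
    batch_log.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "image": image_id,
        "label": label,
        "confidence": confidence,
        "outcome": outcome,
    }))
    return outcome

log: list[str] = []
record_decision(log, "frame-0001", "good", 0.97)    # confident good part -> accept
record_decision(log, "frame-0002", "defect", 0.91)  # confident defect -> reject
record_decision(log, "frame-0003", "good", 0.55)    # low confidence -> escalate
```

    The design point is that low confidence is never silently mapped to accept or reject; it becomes an operator-visible event and a log entry, which is what makes drift auditable after the fact.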

    A practical changeover protocol

    What does a robust changeover protocol look like on an ML-based line? When a new SKU is scheduled, an operator triggers the 'new SKU' workflow on the iPhone station. The station captures 100 to 200 good-part images during the first production run in shadow mode. A line lead reviews and labels any edge cases. The model fine-tunes overnight on the new data and comes online the next shift. In steady state, this adds 15 to 30 minutes of operator time per new SKU and removes the system integrator from the changeover critical path entirely.
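    The protocol is naturally a small state machine, sketched below. The phase names, the image target, and the transition methods are hypothetical; they mirror the steps described above rather than any shipped workflow.

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    SHADOW_CAPTURE = auto()  # model watches the line but does not gate it
    REVIEW = auto()          # line lead labels any edge cases
    FINE_TUNE = auto()       # overnight retraining on the new images
    LIVE = auto()            # model gates the line for the new SKU

class ChangeoverWorkflow:
    """State machine for the 'new SKU' protocol described above (illustrative)."""

    def __init__(self, target_images: int = 150):
        self.phase = Phase.IDLE
        self.target = target_images
        self.captured = 0

    def start_new_sku(self) -> None:
        self.phase = Phase.SHADOW_CAPTURE

    def capture(self, n: int = 1) -> None:
        if self.phase is Phase.SHADOW_CAPTURE:
            self.captured += n
            if self.captured >= self.target:
                self.phase = Phase.REVIEW

    def labels_reviewed(self) -> None:
        if self.phase is Phase.REVIEW:
            self.phase = Phase.FINE_TUNE

    def training_complete(self) -> None:
        if self.phase is Phase.FINE_TUNE:
            self.phase = Phase.LIVE

wf = ChangeoverWorkflow()
wf.start_new_sku()
wf.capture(150)        # first production run, shadow mode
wf.labels_reviewed()   # line lead signs off on edge cases
wf.training_complete() # model comes online the next shift
```

    Shadow mode is the key transition: the model never gates the line until it has seen a full production run of the new SKU and a person has reviewed the labels.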

    Compare that with a classic recipe-manager workflow, where a new SKU means a system integrator visit, half a day of re-tuning thresholds and typically two to four weeks of elevated false-reject rate while the new recipe is bedded in. This is the point where the cost math in our post on the real cost of machine vision becomes visible. The cost of a classic system is mostly changeover cost, not hardware cost.

    How to test this before you commit

    Three questions to ask any vision vendor who claims their system handles changeovers well. One: how many SKUs are currently running in the reference installation they show you, and how often does the line switch? Two: what is the measured false-reject rate on the second day after a changeover, before any tuning? Three: can the line supervisor train a new model without the vendor's involvement? If the answer to three is 'no', the system cannot handle a real changeover cadence on its own.

    If you want a framework for running these tests on your own line, join the Enao community. We share a changeover test protocol, a reference labelling workflow, and what 'good' looks like for false-reject and false-accept rates on an ML-based inspection station.
