
    What is anomaly detection in manufacturing?

    Korbinian Kuusisto
    March 5, 2026

Gartner's 2025 industrial AI survey put anomaly detection adoption at 18 percent among SMEs, versus 41 percent for defect detection in general. The gap is not because the technology is harder. It is because the two keep getting confused on shop floors.

    Anomaly detection and defect detection get used interchangeably in vendor demos. They are not the same model, and they do not fit the same problem. Picking the wrong one on Day 1 is the fastest way to end a pilot at Day 30.

    Anomaly detection vs defect detection: the one-sentence difference

    Defect detection learns what bad parts look like. Anomaly detection learns what good parts look like and flags anything that is not that.

    That sentence is the whole category. Everything else follows from it.

    If you have a closed list of defect classes and enough labeled examples of each, defect detection wins. If your defect set is open, rare, or still being discovered, anomaly detection wins. We have a longer anomaly vs defect detection breakdown that walks through the tradeoffs in detail.

    Why anomaly models fit cosmetic and surface QC, and where they break

Anomaly detection thrives under three conditions: a well-defined good state, enough example images of that good state, and tolerance for false positives early on.

    Cosmetic and surface QC fit this perfectly. A painted body panel has one good state, hundreds of possible defect variations, and a workflow that can tolerate a quality engineer reviewing the flags.

Where it breaks: lines with very high normal variance. A food line that treats 40 different label artworks as all normal will confuse the model into calling everything an anomaly. The fix is to split the training data by SKU, or to move to defect detection once enough flagged examples have been collected.
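The SKU split amounts to grouping training images by SKU and fitting one anomaly model per group. A minimal sketch, where train_anomaly_model is a hypothetical stand-in for whatever trainer you actually use:

```python
from collections import defaultdict

def train_anomaly_model(images):
    # Placeholder trainer: a real implementation would fit an anomaly
    # model to these images. Here we just record how many we saw.
    return {"n_train": len(images)}

def train_per_sku(dataset):
    """dataset: iterable of (sku, image) pairs -> one model per SKU."""
    by_sku = defaultdict(list)
    for sku, image in dataset:
        by_sku[sku].append(image)
    # One narrow "normal" per SKU instead of one model spanning 40 artworks.
    return {sku: train_anomaly_model(imgs) for sku, imgs in by_sku.items()}
```

At inference time you route each frame to the model for its SKU, which keeps each model's definition of normal narrow enough to be useful.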

    How a model learns normal without labeled defects

    The training loop is simpler than people expect. Show the model 500 to 5,000 images of parts that passed end-of-line QC. The model builds an internal representation of what normal looks like. At inference, any image whose features sit too far from the learned distribution gets flagged.
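The "too far from the learned distribution" step can be sketched with a simple Gaussian model over image feature vectors. This is a minimal illustration, not Enao's actual method; the feature extractor (normally a pretrained network embedding) is mocked here with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for feature vectors of 2,000 parts that passed end-of-line QC.
normal_features = rng.normal(0.0, 1.0, size=(2000, 64))

# "Learn normal": fit a mean and covariance to the good-part features.
mean = normal_features.mean(axis=0)
cov = np.cov(normal_features, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(64))  # regularized for stability

def anomaly_score(feature_vec):
    """Mahalanobis distance from the learned 'normal' distribution."""
    diff = feature_vec - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

# Set the flagging threshold from the training data itself,
# e.g. the 99th percentile of scores on known-good parts.
train_scores = [anomaly_score(f) for f in normal_features]
threshold = np.percentile(train_scores, 99)

def is_flagged(feature_vec):
    return anomaly_score(feature_vec) > threshold
```

Note that no defect labels appear anywhere: the threshold comes from good parts alone, which is exactly why this works before any defects have been collected.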

    Two practical points. First, shift handling matters. If lighting changes between day and night shifts, capture training data across both. Second, data volume beats data quality early. Five thousand phone-captured images beat 500 carefully framed industrial-camera shots in almost every anomaly deployment we have seen.

    Real shop-floor examples

Paint runs on a powder-coat line. Normal is a uniform finish. Anomaly detection flags any patch where the gradient departs from the trained baseline. One Enao customer catches drips and orange-peel texture on parts that used to pass manual QC, at under a 1.2-second exposure.

    Mis-printed labels on pharma blister packs. Normal is the sharp print of the approved artwork. The anomaly model flags scuffed print, skipped lines, or foil wrinkling without being told what a scuff, skip, or wrinkle looks like.

    Weld spatter and splash on automotive body panels. Normal is the polished raw panel surface. The anomaly model catches isolated spatter that a fixed-rule system would miss because no one wrote a rule for a defect shape that is essentially random.

    Building the anomaly-to-defect pipeline

    Treat anomaly detection as the Day 1 model. Ship it to production once it holds a reasonable false-positive rate. Then use the flagged items to build a labeled set for a defect detection model. That is the Day 30 model.

    The combined workflow is what we see work on most deployments. Anomaly for broad coverage from the first day of operation. Defect detection for the top five or ten classes that actually need fast, high-precision calls.
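The handoff from Day 1 anomaly flags to Day 30 defect labels can be sketched as a review queue. ReviewQueue and its method names are illustrative, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)   # flagged, awaiting review
    labeled: dict = field(default_factory=dict)   # defect class -> image ids

    def flag(self, image_id):
        """Anomaly model flags an image for human review."""
        self.pending.append(image_id)

    def confirm(self, image_id, defect_class):
        """Quality engineer confirms the flag and assigns a defect class."""
        self.pending.remove(image_id)
        self.labeled.setdefault(defect_class, []).append(image_id)

    def dismiss(self, image_id):
        """False positive: drop it (or recycle it as extra 'normal' data)."""
        self.pending.remove(image_id)

    def ready_classes(self, min_examples=50):
        """Defect classes with enough examples to train the Day 30 model."""
        return [c for c, imgs in self.labeled.items()
                if len(imgs) >= min_examples]
```

Once ready_classes returns the top handful of classes, those labeled sets become the training data for the high-precision defect model, while the anomaly model keeps covering everything else.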

    For the broader picture, see our pillar on machine vision inspection. If you want concrete examples of what AI catches that humans miss, this round-up on defect types is the closest thing we have published to a reference list.

    At Enao Vision, anomaly detection is the typical Day 1 deployment. A line lead records two to four hours of video on an iPhone mounted over the station, we train the first anomaly model overnight, and the line runs in production on the next shift. The anomaly flags go into a review queue; whatever gets confirmed becomes training data for the Day 30 defect model. Model tuning tips get shared in our community Slack.

    The adoption gap between anomaly detection and defect detection is not going to close with better marketing. It will close when teams stop treating anomaly as a simpler version of defect detection, and start using it for the problems it was built for.

