
    What is anomaly detection in manufacturing?

Korbinian Kuusisto, CEO & Founder, Enao Vision
    March 5, 2026
    What is anomaly detection in manufacturing?

Anomaly detection in manufacturing is a machine learning approach that learns what good parts look like and flags anything outside that distribution. It is the inverse of defect detection, which learns what bad parts look like from labeled examples. Anomaly detection wins when the defect set is open, rare, or still being discovered. Industry surveys, including Gartner's, put anomaly detection adoption at 18%, versus 41% for defect detection. The gap is not the technology; it is how the two approaches get confused on shop floors.

    Anomaly detection and defect detection get used interchangeably in vendor demos. They are not the same model, and they do not fit the same problem. Picking the wrong one on Day 1 is the fastest way to end a pilot at Day 30.

    How does anomaly detection differ from defect detection?

    If you have a closed list of defect classes and enough labeled examples of each, defect detection wins. If your defect set is open, rare, or still being discovered, anomaly detection wins. We have a longer anomaly vs defect detection breakdown that walks through the tradeoffs in detail.

    Why do anomaly models fit cosmetic and surface QC?

Anomaly detection thrives on three conditions: a well-defined good state, enough example images of that good state, and tolerance for false positives early on.

    Cosmetic and surface QC fit this perfectly. A painted body panel has one good state, hundreds of possible defect variations, and a workflow that can tolerate a quality engineer reviewing the flags.

Where it breaks: lines with very high normal variance. A food line that treats 40 different label artworks as all normal will confuse the model into calling everything an anomaly. The fix is to split the data by SKU, or to move to defect detection once enough flagged examples have been collected.
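In code, the per-SKU fix is mostly routing: train one model per artwork instead of one model over all of them. The sketch below is hypothetical; labeled_feed, train_anomaly_model, and score stand in for whatever image feed and anomaly model the line actually runs.

```python
from collections import defaultdict

# Hypothetical sketch: one anomaly model per SKU, so each model
# learns a single "normal" instead of 40 competing label artworks.
images_by_sku = defaultdict(list)
for image_path, sku in labeled_feed:  # (path, sku) pairs from the line's scanner
    images_by_sku[sku].append(image_path)

# train_anomaly_model is a placeholder for the model you actually use.
models = {sku: train_anomaly_model(paths) for sku, paths in images_by_sku.items()}

def inspect(image_path, sku):
    # Score each part against its own SKU's learned normal only.
    return models[sku].score(image_path)
```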

    How does a model learn normal without labeled defects?

    The training loop is simpler than people expect. Show the model 500 to 5,000 images of parts that passed end-of-line QC. The model builds an internal representation of what normal looks like. At inference, any image whose features sit too far from the learned distribution gets flagged.
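As a rough sketch of that loop, here is one common recipe: pool features from a pretrained backbone over the good images, fit a Gaussian, and score new parts by Mahalanobis distance from it. Everything here is illustrative rather than a prescription; good_image_paths and heldout_good_paths are assumed lists of passed-QC images, and the ResNet-18 backbone and 99th-percentile threshold are just reasonable defaults.

```python
import numpy as np
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

# Pretrained backbone with the classifier head removed: we only want features.
weights = ResNet18_Weights.DEFAULT
backbone = torch.nn.Sequential(*list(resnet18(weights=weights).children())[:-1])
backbone.eval()
preprocess = weights.transforms()

def embed(paths):
    """Map each image to a pooled 512-d feature vector."""
    feats = []
    with torch.no_grad():
        for p in paths:
            x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).flatten().numpy())
    return np.stack(feats)

# "Training": fit a Gaussian to features of known-good parts only.
good = embed(good_image_paths)  # hypothetical list of 500-5,000 passed-QC images
mu = good.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(good, rowvar=False))  # pinv tolerates a singular cov

def anomaly_score(path):
    """Mahalanobis distance from the learned 'normal' distribution."""
    d = embed([path])[0] - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Flag anything far from normal; calibrate the threshold on held-out good images.
tau = np.percentile([anomaly_score(p) for p in heldout_good_paths], 99)
flagged = anomaly_score("part_0001.jpg") > tau  # illustrative path
```

Note that nothing in the fit ever sees a defect, which is exactly why the approach holds up when the defect set is open.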

    Two practical points stand out. First, shift handling matters: if lighting changes between day and night shifts, capture training data across both. Second, data volume beats data quality early. Five thousand phone-captured images beat 500 carefully framed industrial-camera shots in almost every anomaly deployment we have seen.

    What does anomaly detection look like on a real shop floor?

Paint runs on a powder-coat line. Normal is a uniform finish. Anomaly detection flags any patch where the gradient departs from the trained baseline. One Enao customer catches drips and orange-peel texture on parts that used to pass manual QC under a 1.2-second exposure.

Misprinted labels on pharma blister packs. Normal is the sharp print of the approved artwork. The anomaly model flags scuffed print, skipped lines, or foil wrinkling without being told what a scuff, skip, or wrinkle looks like.

    Weld spatter and splash on automotive body panels. Normal is the polished raw panel surface. The anomaly model catches isolated spatter that a fixed-rule system would miss because no one wrote a rule for a defect shape that is essentially random.

    How do you build an anomaly-to-defect pipeline?

    Treat anomaly detection as the Day 1 model. Ship it to production once it holds a reasonable false-positive rate. Then use the flagged items to build a labeled set for a defect detection model. That is the Day 30 model.

The combined workflow is what we see work in most deployments: anomaly detection for broad coverage from the first day of operation, defect detection for the top five or ten classes that actually need fast, high-precision calls.
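A minimal sketch of that handoff, assuming an anomaly_score function like the one above; the threshold, review step, and defect class names are illustrative placeholders.

```python
import csv

TAU = 3.5  # illustrative threshold, calibrated to a tolerable false-positive rate

# Day 1: the anomaly model routes every unusual part into a review queue.
review_queue = [p for p in todays_parts if anomaly_score(p) > TAU]  # todays_parts is hypothetical

# A quality engineer confirms each flag and names the defect class.
# Confirmed flags become labeled training data for the Day 30 model.
with open("defect_labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image_path", "label"])
    for path in review_queue:
        label = ask_reviewer(path)  # placeholder UI step, e.g. "drip", "orange_peel", "ok"
        writer.writerow([path, label])

# Once the top five or ten classes have enough rows, train the defect detector on them.
```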

    For the broader picture, see our pillar on machine vision inspection. If you want concrete examples of what AI catches that humans miss, this round-up on defect types is the closest thing we have published to a reference list.

    At Enao Vision, anomaly detection is the typical Day 1 deployment. A line lead records two to four hours of video on an iPhone mounted over the station, we train the first anomaly model overnight, and the line runs in production on the next shift. The anomaly flags go into a review queue; whatever gets confirmed becomes training data for the Day 30 defect model. Model tuning tips get shared in our community Slack.

    The adoption gap between anomaly detection and defect detection is not going to close with better marketing. It will close when teams stop treating anomaly as a simpler version of defect detection, and start using it for the problems it was built for.

    Scoping an anomaly-detection or defect-detection pilot? Compare approaches with other teams in our community.

    Frequently asked questions about anomaly detection in manufacturing

    What is the one-sentence difference between anomaly detection and defect detection?

    Defect detection learns what bad parts look like from labeled examples. Anomaly detection learns what good parts look like and flags anything that does not fit that distribution. Defect detection wins when you have a closed list of defect classes with enough labeled examples. Anomaly detection wins when your defect set is open, rare, or still being discovered.

    How many images do you need to train an anomaly detection model?

    Five hundred to five thousand images of parts that passed end-of-line QC are typically enough for a working anomaly detection model. The model builds a representation of what normal looks like. At inference, any image whose features sit too far from that distribution gets flagged. Capture training data across all shifts and lighting conditions, especially day-to-night transitions.

    When does anomaly detection fail?

Anomaly detection breaks on lines with very high normal variance. A food line that treats forty different label artworks as all normal will confuse the model into calling everything an anomaly. The fix is to split the data by SKU, or move to defect detection once enough flagged examples have been collected to build a labeled set.

    Should you start with anomaly detection or defect detection?

    Treat anomaly detection as the Day 1 model on most lines. Ship it once it holds a reasonable false-positive rate. Use the flagged items to build a labeled set for a defect detection model, which becomes the Day 30 model. Anomaly for broad coverage from the first day, defect detection for the top five or ten classes that need fast, high-precision calls.

    Key takeaways

    • Anomaly detection learns the good state and flags anything outside it; defect detection learns labeled bad states and matches them.
    • Anomaly detection wins when defects are open, rare, or still being discovered. Defect detection wins on closed, well-labeled defect classes.
    • 500 to 5,000 images of known-good parts are typically enough to train a working anomaly model that handles cosmetic and surface QC.
    • Anomaly models break on lines with very high normal variance, like 40-SKU food labels. Split by SKU or graduate to defect detection.
    • Most lines start with anomaly detection on Day 1 and add a defect detection model by Day 30, using flagged examples as labeled data.



Written by

    Korbinian Kuusisto

    CEO & Founder, Enao Vision
