
    From anomaly detection to defect detection for manufacturing quality control: A quick guide

    Korbinian Kuusisto, CEO & Founder, Enao Vision
    February 6, 2026

    Defect detection and anomaly detection are two AI machine vision approaches for manufacturing quality control. Defect detection uses supervised learning to classify known defect types (scratch, dent, missing component) with high accuracy and low false positives. Anomaly detection uses unsupervised learning to flag anything that looks unusual, no labels required, but with much higher false positive rates. This guide compares the two side by side and shows when each approach actually wins on the shop floor.

    Below: how AI models have shifted what's possible since 2010, the business value of automated quality inspection, when each approach fits, and the questions worth asking any vision provider.

    How have AI models improved since 2010?

    A labelled defect using an iPhone and AI-powered defect detection software

    A decade ago, big data and AI were already hot topics, but it wasn’t until ChatGPT hit the world stage that AI became broadly accessible. The same leap applies to manufacturing. In the early 2010s, when deep learning was new and labeled manufacturing data was scarce, unsupervised methods were often the only practical option simply because collecting and labeling enough data took too long. In 2026, AI offers:

    • Pre-trained models requiring minimal fine-tuning: Solutions can already achieve high (around 80%) accuracy “out of the box” to get teams started
    • Active learning drastically reducing labeling needs: Instead of feeding 10,000-100,000 images, you may only need a few dozen to a few hundred to get started
    • Transfer learning making small datasets viable: Models reuse what they have learned on large, generic image sets, so they no longer start from scratch on an isolated dataset
    • Synthetic data generation for rare defects: Realistic examples of rare defects can be generated, reducing the need to wait for that 1-in-100,000 occurrence to be documented

    Today’s AI models make defect detection more accessible, accurate, and actionable than a decade ago. Teams can realistically test solutions on the shopfloor in a matter of hours to days, instead of spending months collecting data. With modern image handling, supervised defect detection is often more efficient than anomaly detection, which was essentially a pass/fail approach that made sense given the AI limitations of the past.
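
    To make the pre-trained-model and transfer-learning points above concrete, here is a minimal fine-tuning sketch in PyTorch/torchvision. The folder layout, class names, and hyperparameters are illustrative assumptions, not a description of any specific vendor’s pipeline:

    # Minimal transfer-learning sketch: fine-tune a pretrained ResNet-18 on a
    # small labelled defect dataset. Paths, class names, and hyperparameters
    # are assumptions for illustration only.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms
    from torchvision.models import resnet18, ResNet18_Weights

    # Expect one folder per class, e.g. defect_images/{good,scratch,dent,missing_part}
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder("defect_images", transform=transform)
    loader = DataLoader(dataset, batch_size=16, shuffle=True)

    # Start from ImageNet weights and train only the new classification head.
    model = resnet18(weights=ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    Because only the small classification head is trained, a few dozen to a few hundred labelled images per class are typically enough for a usable first model.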

    What business value does automated quality inspection deliver?

    Automation has a proven track record in manufacturing, from assembly lines to robotic sorting. The question is how AI-based anomaly detection and defect detection can be used for quality inspection in a way that fits lean production principles. Below are the values that automated vision inspection can provide to quality inspection:

    • Increased efficiency for production flows
    • Increased accuracy for quality inspection
    • Support for defect classification 
    • Continuous improvement loop for supervised learning models
    • Focuses human inspection on edge or complex cases

    Tags for defect detection with machine vision systems

    The unsupervised learning approach for anomaly detection, which used to be the standard, comes with these critical limitations:

    1. Low precision and high false positive/negative rates: Anomaly detection flags non-critical deviations as defects, leading to unnecessary pseudo-rejects. Colour variation, shadows from changing light throughout the day, or raw material with a different surface texture but the same performance can all be falsely rejected. In high-volume manufacturing, even 2-3% false positives can mean hundreds of good parts rejected per day, eroding trust in the system (see the quick calculation after this list). If the system is instead calibrated to ignore these non-critical deviations, the false negative rate rises sharply, and thousands of defective products may pass quality control flagged as good.
    2. Lack of actionable information: Anomaly detection simply flags any deviation; it does not tell the operator what the deviation is, where it is, or how severe it is.
    3. Constant calibration required: Anomaly detection systems might work “out of the box”, but they need frequent adjustment so that they are neither too sensitive nor too lenient and still reflect current production conditions.
    4. Lack of a learning loop: Anomaly detection doesn't improve from operator feedback, unlike supervised learning models.
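
    To put the false positive numbers in point 1 into perspective, here is the back-of-the-envelope arithmetic; the daily volume and rates are illustrative assumptions only:

    # Quick arithmetic behind the pseudo-reject claim in point 1 above.
    # Throughput and rates are illustrative assumptions, not measured data.
    daily_volume = 20_000          # parts inspected per day
    good_share = 0.98              # share of parts that are actually good

    for name, false_positive_rate in [("anomaly detection", 0.15),
                                      ("defect detection", 0.02)]:
        pseudo_rejects = daily_volume * good_share * false_positive_rate
        print(f"{name}: ~{pseudo_rejects:.0f} good parts rejected per day")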

    In contrast, supervised learning as the primary method with today’s technology can:

    1. Support smooth inspection flows: By feeding labelled defects, operators can keep training the model until it hits an acceptable false positive/negative rate (e.g. 1-2%)
    2. Provide detailed defect classification: Operators feed and label defects based on acceptability thresholds, and the model learns to describe the defect type (e.g. scratch), size, location, severity, and so forth
    3. Enable smart routing: Borderline cases are sent for human review, increasing inspection efficiency (sketched after this list)
    4. Support continuous improvement: New types of defects can be uploaded to improve the model’s accuracy
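
    As a sketch of the smart routing in point 3, the decision logic can be as simple as a confidence threshold per outcome; the thresholds and class names below are assumptions for illustration:

    # Confidence-based routing sketch for point 3 above. Thresholds and
    # class names are illustrative assumptions.
    def route(prediction: str, confidence: float) -> str:
        """Decide what happens to a part after the model scores it."""
        if prediction == "good" and confidence >= 0.98:
            return "pass"              # straight to the next station
        if prediction != "good" and confidence >= 0.95:
            return "reject"            # classified defect, auto-reject
        return "human review"          # borderline case, route to an operator

    print(route("good", 0.99))      # pass
    print(route("scratch", 0.97))   # reject
    print(route("dent", 0.60))      # human review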

    Below are examples that illustrate the differences between anomaly detection and defect detection:

    Surface scratch detection
    • Anomaly detection: flags all surface variations, including acceptable tooling marks (estimated 20% false positive/negative rate)
    • Defect detection: classifies by scratch depth, length, and location, then rejects against threshold specs (estimated 2% false positive/negative rate)

    Assembly verification
    • Anomaly detection: detects "something different" but can't specify what is missing or wrong
    • Defect detection: identifies exactly which component is missing, misaligned, or incorrect, with over 98% accuracy

    PCB solder joint inspection
    • Anomaly detection: flags minor flux residue, normal component variation, and lighting shadows (estimated 15% false positive rate)
    • Defect detection: distinguishes between cold joints (reject), acceptable joints (pass), and harmless flux (estimated 0.5% false positive rate)

    When should you use defect detection vs anomaly detection?

    Even with this comparison, the solution does not need to be an either-or. Supervised models handle known defects well and can serve as the primary method. Anomaly detection is useful as a secondary method: its pass/fail approach can catch unforeseen issues, such as a new failure mode in the material or contamination. Anomaly detection can also be useful for an initial production run, flagging defects that are then labelled and fed into a supervised model for long-term use.

    What should manufacturers test in an automated quality inspection system?

    Whatever machine vision inspection system you choose, the principles remain the same. Make sure you test for:

    • Accuracy in detection
    • Consistency to ensure low false positives
    • Granularity of defect information
    • Flexibility of the solution to set acceptability thresholds
    • Ease of use from set up to daily usage and maintenance 

    One way to decide on a solution is to use lean production’s overall equipment effectiveness (OEE) calculation together with the operator time each system demands. Below is an illustrative example of how to think about the two systems we’ve been describing:

    Anomaly detection system (estimated operator hours)
    • Initial setup and calibration: 40 hours
    • Ongoing false positive review: 2 hours/day × 260 days = 520 hours
    • Recalibration events: 60 hours
    • Total: 620 hours

    Defect detection system (estimated operator hours)
    • Initial setup and calibration: 100 hours
    • Initial defect labeling (500 images): 80 hours
    • Model training and validation: 20 hours
    • Ongoing review of model mistakes: 30 minutes/day × 260 days = 130 hours
    • Adding new defect types (quarterly): 40 hours
    • Total: 370 hours

    As the example illustrates, the "no labeling required" approach of anomaly detection requires more human time checking for errors. In contrast, supervised learning requires an initial investment in labelling data and setup, but can achieve high detection rates with better quality outcomes and reduced human effort.
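
    For readers who want to plug in their own numbers, here is a rough sketch of the OEE comparison in Python. The availability, performance, defect rate, and false positive figures are illustrative assumptions, not benchmarks:

    # Rough OEE comparison sketch. OEE = availability x performance x quality,
    # where quality = good parts shipped / total parts produced. All figures
    # are illustrative assumptions.
    def oee(availability: float, performance: float, quality: float) -> float:
        return availability * performance * quality

    total_parts = 100_000
    true_defect_rate = 0.02

    def quality_factor(false_positive_rate: float) -> float:
        # Good parts wrongly rejected by the inspection system are lost output.
        good_parts = total_parts * (1 - true_defect_rate)
        return good_parts * (1 - false_positive_rate) / total_parts

    anomaly_oee = oee(availability=0.90, performance=0.95, quality=quality_factor(0.15))
    defect_oee = oee(availability=0.90, performance=0.95, quality=quality_factor(0.02))
    print(f"OEE with anomaly detection: {anomaly_oee:.1%}")
    print(f"OEE with defect detection:  {defect_oee:.1%}")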

    What should your quality team ask a machine vision provider?

    We encourage quality and shopfloor managers to speak to different automated vision inspection or quality control providers. This gives you a sense of how different companies have approached the same problem and what features may be best for your manufacturing processes. Finding a provider who can accurately describe the solution’s capabilities and limitations is key to a partnership.

    Below are some questions you can ask a vendor. Anomaly detection providers will likely not have concrete answers, but put the same questions to providers focused on defect detection and pay attention to how they answer, to get a better sense of how reliable the solution is.

    1. "What's your false positive/negative rate in production?" If their numbers are good, they will share, and if they don’t commit then move on.
    2. "How do I get actionable defect classification for root cause analysis?" If they give a vague answer instead of a demo of the defect message, you won’t get it on your shopfloor either.
    3. "What happens when my process changes? How much recalibration is needed?" If the number is high, they won't say. If they shrug it off with an hour or two, press them for details on how.
    4. "Can the system learn from operator feedback to improve over time?" A pure anomaly detection solution cannot do this, no matter the claims.
    5. "What's the path to migrate from anomaly detection to defect detection as I collect data?" If the provider does not offer this, maybe speak to another one.

    Anomaly detection and defect detection models each have specific strengths. In the best case, one does not replace the other; they can be complementary. Today’s automated quality assurance solutions are more affordable than ever, with lower upfront and hardware costs. For example, Enao Vision only requires an iPhone and our free app to get started. There isn’t a single best solution, but world-class quality control systems should deliver on precision, continuous improvement, actionable information, and easy integration and maintenance.

    Frequently asked questions about anomaly and defect detection

    What's the difference between defect detection and anomaly detection?

    Defect detection is supervised: you label known defect types (scratch, dent, missing part) and train a model to classify each one with high precision. Anomaly detection is unsupervised: the model learns what "good" looks like and flags anything unusual without labels. Defect detection delivers actionable classifications and 1 to 2 percent false positive rates with effort, while anomaly detection sets up faster but typically runs 10 to 20 percent false positives in real production conditions.
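
    A conceptual sketch of the difference, using scikit-learn on pre-extracted image feature vectors (the data here is synthetic and purely illustrative):

    # Supervised defect detection vs. unsupervised anomaly detection.
    # Feature vectors are synthetic stand-ins for real image features.
    import numpy as np
    from sklearn.ensemble import IsolationForest, RandomForestClassifier

    rng = np.random.default_rng(0)
    features_good = rng.normal(0.0, 1.0, size=(500, 64))     # good parts
    features_scratch = rng.normal(3.0, 1.0, size=(40, 64))   # labelled scratch defects

    # Defect detection (supervised): needs labels, returns a named defect class.
    X = np.vstack([features_good, features_scratch])
    y = np.array(["good"] * 500 + ["scratch"] * 40)
    classifier = RandomForestClassifier(random_state=0).fit(X, y)

    # Anomaly detection (unsupervised): trained on good parts only, returns
    # "unusual" (-1) or "normal" (+1) with no explanation of what is wrong.
    detector = IsolationForest(random_state=0).fit(features_good)

    new_part = rng.normal(3.0, 1.0, size=(1, 64))
    print("Defect detection says:", classifier.predict(new_part)[0])   # e.g. "scratch"
    print("Anomaly detection says:", detector.predict(new_part)[0])    # -1 = anomaly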

    When should I use anomaly detection instead of defect detection?

    Anomaly detection is the right starting point when you have no defect images yet, when you need a fast first deployment to catch the obvious failures, or as a secondary check on top of defect detection to flag never-before-seen failure modes like new contamination or material faults. For everything else in 2026, supervised defect detection wins on accuracy, traceability, and operator trust.

    How many labeled images does defect detection need?

    With pre-trained models and active learning, 30 to 100 labeled images per defect class are usually enough for a working first model. For stable production, plan on 200 to 500 per class plus continuous review of model mistakes. That is a fraction of the 10,000 to 100,000 images that supervised vision needed a decade ago.
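
    A minimal sketch of the active-learning idea behind those numbers, using uncertainty sampling with scikit-learn; the pool sizes, batch size, and simulated labels are assumptions for illustration:

    # Active learning via uncertainty sampling: label only the images the
    # current model is least sure about. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    labelled_X = rng.normal(size=(50, 64))          # small labelled seed set
    labelled_y = rng.integers(0, 2, size=50)        # 0 = good, 1 = defect
    pool_X = rng.normal(size=(5000, 64))            # unlabelled production images

    for round_ in range(3):
        model = LogisticRegression(max_iter=1000).fit(labelled_X, labelled_y)
        # Pick the 20 pool images closest to a 50/50 prediction.
        probs = model.predict_proba(pool_X)[:, 1]
        uncertain = np.argsort(np.abs(probs - 0.5))[:20]
        # In production these 20 images go to an operator; here we fake the labels.
        new_labels = rng.integers(0, 2, size=20)
        labelled_X = np.vstack([labelled_X, pool_X[uncertain]])
        labelled_y = np.concatenate([labelled_y, new_labels])
        pool_X = np.delete(pool_X, uncertain, axis=0)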

    Can a defect detection model improve from operator feedback?

    Yes. Every time an operator confirms or overrides a model decision, that feedback can be added to the training set, and the model is retrained on the new data. This is the learning loop anomaly detection cannot deliver, because anomaly detection has no labels to learn from. Over months, supervised defect detection compounds in accuracy while anomaly detection plateaus.

    Key takeaways

    • Defect detection (supervised) classifies known failure modes; anomaly detection (unsupervised) flags anything unusual. Most 2026 quality teams want defect detection as their primary method.
    • Modern pre-trained models, active learning, and transfer learning have cut the labeled-data requirement from tens of thousands of images to a few hundred per class.
    • Anomaly detection's "no labels" promise is misleading: it typically runs 10 to 20 percent false positives in production, eats hundreds of operator hours per year reviewing pseudo-rejects, and cannot improve from feedback.
    • Run defect detection as the primary inspection layer and anomaly detection as a secondary safety net for unforeseen failure modes. The two are complementary, not exclusive.
    • Vendor questions worth asking: false positive rate in production, defect classification granularity, recalibration time when products change, learning loop from operator feedback, and migration path from anomaly to defect detection.



    Author

    Korbinian Kuusisto

    CEO & Founder, Enao Vision
