Computer Vision for Defect Detection: How Small Manufacturers Are Doing It Without Million-Dollar Budgets
Walk into any large automotive or electronics factory and you’ll see sophisticated machine vision systems inspecting every product on the line. Multiple cameras, precision optics, custom lighting rigs, dedicated processing hardware — systems that cost $200,000 to $500,000 per inspection point. For big manufacturers running millions of units, the ROI is obvious.
But what about the manufacturer running a production line in Dandenong with 40 staff, producing 5,000 units a day? Until recently, automated visual inspection wasn’t realistic at that scale. The equipment was too expensive, the setup was too complex, and you needed specialised machine vision engineers to maintain it.
That’s genuinely changing. And it’s not because the technology got slightly cheaper — it’s because a fundamentally different approach has emerged.
The Old Way vs. The New Way
Traditional machine vision systems use rule-based algorithms. An engineer programs specific rules: “reject if this dimension exceeds 12.5mm,” “flag if colour value falls outside this RGB range,” “fail if the surface has a scratch longer than 2mm.” This works brilliantly for well-defined, consistent defects. But it’s brittle. Every new product, every new defect type, every change in lighting or positioning requires reprogramming.
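The rule-based approach can be sketched as a set of hand-coded threshold checks. The specific dimensions, colour ranges, and function names below are illustrative, not taken from any particular vision system:

```python
# Illustrative rule-based inspection: every rule is a hand-written
# threshold, and any change to product, lighting, or positioning
# means coming back and editing these rules.

def inspect(measurements: dict) -> list[str]:
    """Return a list of failure reasons (empty list = pass)."""
    failures = []
    # "reject if this dimension exceeds 12.5mm"
    if measurements["width_mm"] > 12.5:
        failures.append("width exceeds 12.5mm")
    # "flag if colour value falls outside this RGB range"
    r, g, b = measurements["mean_rgb"]
    if not (180 <= r <= 220 and 180 <= g <= 220 and 180 <= b <= 220):
        failures.append("colour outside accepted RGB range")
    # "fail if the surface has a scratch longer than 2mm"
    if measurements["longest_scratch_mm"] > 2.0:
        failures.append("scratch longer than 2mm")
    return failures

# A part that passes every rule:
good = {"width_mm": 12.1, "mean_rgb": (200, 200, 200), "longest_scratch_mm": 0.5}
# A part with a dimensional defect:
bad = {"width_mm": 13.0, "mean_rgb": (200, 200, 200), "longest_scratch_mm": 0.5}
```

The brittleness is visible in the structure itself: a new defect type means a new `if` branch, and every threshold assumes the lighting and positioning that were in place when it was written.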
The new approach uses deep learning — specifically convolutional neural networks trained on images of good and defective products. Instead of programming rules, you show the system thousands of examples. “These are good products. These are defective.” The model learns to distinguish between them, and it generalises to defects it hasn’t explicitly been shown, as long as they’re visually similar to the training data.
This matters for small manufacturers because it dramatically reduces the expertise needed to set up and maintain the system. You don’t need a machine vision engineer writing custom algorithms. You need someone who can collect and label images.
What a Realistic Implementation Looks Like
Let’s walk through what a small Australian manufacturer — say, a plastics injection moulder or a metalwork fabricator — would actually need.
Camera hardware. You don’t need $20,000 industrial cameras. For many defect detection applications, industrial cameras in the $1,500-$4,000 range provide sufficient resolution. Companies like FLIR (now part of Teledyne), Basler, and IDS Imaging offer cameras specifically designed for industrial inspection at these price points. For some applications, even high-end webcams or machine vision cameras under $1,000 can work.
Lighting. This is the part that’s easy to underestimate. Consistent, controlled lighting is arguably more important than the camera itself. A scratch that’s clearly visible under angled light disappears under diffuse light. The good news is that LED ring lights, backlighting panels, and dome lights designed for machine vision inspection are available for $300-$1,500 per unit. Getting the lighting right often takes more experimentation than any other part of the setup.
Processing. An NVIDIA Jetson module — the Orin Nano costs around $500 — can run inference on trained models in real time for most production line speeds. For higher throughput, the Jetson Orin NX at around $1,000 handles more demanding workloads. These are small, low-power units that mount directly near the camera on the production line.
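"Real time" is worth sanity-checking against your actual line speed. A rough per-unit time budget calculation (the line rate and shift length below are placeholders, matching the 5,000-units-a-day example earlier, not benchmarks for any particular Jetson model):

```python
def frame_budget_ms(units_per_day: float, hours_per_day: float = 8.0) -> float:
    """Time available per unit, in milliseconds, at a given line rate."""
    units_per_second = units_per_day / (hours_per_day * 3600)
    return 1000.0 / units_per_second

# At 5,000 units over an 8-hour shift, each unit allows ~5.8 seconds —
# generous headroom for a model that runs inference in tens of milliseconds.
budget = frame_budget_ms(5000)
```

If your budget per unit is in the seconds, almost any modern edge device will keep up; it's only at very high line speeds that inference latency starts to dictate hardware choice.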
Software platform. This is where the market has really opened up. Platforms like Roboflow, Landing AI (Andrew Ng’s company), and Neurala provide end-to-end workflows for training, deploying, and managing computer vision models without requiring deep ML expertise. Pricing typically ranges from $500-$2,000 per month, and some offer free tiers for smaller deployments.
Total cost for a single inspection point: $5,000-$15,000 in hardware and setup, plus $500-$2,000 per month in software. Compare that to $200,000+ for a traditional machine vision system.
The Training Data Challenge
Here’s the catch. Deep learning models need training data, and crucially, they need examples of defects. If you’re a small manufacturer, you might not have a library of defect images sitting around.
The practical approach is this: run the camera system for two to four weeks without any AI, simply capturing images of everything that comes through. Have your existing quality inspectors flag which products are defective and what type of defect each one has. That gives you your initial training dataset.
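One common way to organise that collection period is a folder-per-label layout with a running label log, which most training platforms can ingest or convert easily. A minimal sketch — the folder names, CSV format, and `record_label` helper are my own convention for illustration, not a requirement of any platform:

```python
import csv
from pathlib import Path

def record_label(root: Path, image_name: str, label: str) -> Path:
    """Append an inspector's verdict to labels.csv and return the
    per-label folder the image file should be moved into."""
    folder = root / label          # e.g. captures/good, captures/flash_defect
    folder.mkdir(parents=True, exist_ok=True)
    with open(root / "labels.csv", "a", newline="") as f:
        csv.writer(f).writerow([image_name, label])
    return folder

# As inspectors review each captured frame, their verdicts accumulate:
root = Path("captures")
record_label(root, "unit_00042.png", "good")
record_label(root, "unit_00043.png", "flash_defect")
```

The point of logging labels from day one is that the two-to-four-week capture period produces a ready-made training dataset rather than a pile of unsorted images.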
You’ll need a minimum of about 200-500 images of each defect type to get reasonable accuracy. For some applications, transfer learning — starting with a model pre-trained on similar industrial inspection tasks and fine-tuning on your specific products — can reduce this to as few as 50-100 images.
One Australian metal fabricator I’m aware of worked with an Australian AI company to build a defect detection system that started with just 300 labelled images. Within three months of continuous operation and iterative training (the model improves as it sees more examples), their detection accuracy reached 97.5% — matching their best human inspectors and significantly outperforming their average ones.
Real-World Results from Australian SMEs
A plastics manufacturer in Melbourne’s south-east implemented computer vision inspection on their packaging line. Their numbers after six months:
- Defect detection rate improved from 88% (human inspection) to 96.8% (AI plus human review of flagged items)
- False rejection rate decreased from 4.2% to 1.1% — meaning less good product was being thrown away
- The system paid for itself in 4.5 months through reduced waste and fewer customer returns
- Inspection throughput increased, allowing them to speed up the production line by 12%
A sheet metal fabricator in western Sydney saw similar results on their laser-cut parts inspection. They were particularly impressed by the system’s ability to detect subtle dimensional variations that their inspectors consistently missed — not because the inspectors were careless, but because the variations were below the threshold of reliable human visual detection.
Practical Advice for Getting Started
If you’re a small manufacturer considering computer vision for quality control, here’s a realistic path.
Pick one product line and one defect type. Don’t try to solve every quality problem at once. Choose the product with the highest defect rate or the highest cost of defects getting through. Start there.
Get the lighting right before you worry about the AI. Take sample images with different lighting setups and check whether the defects you care about are clearly visible in the images. If a human can’t see the defect in the image, the AI won’t find it either.
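The "can a human see it" test can be made slightly more objective by measuring the contrast between a marked defect region and its surroundings in your sample shots. A crude NumPy sketch — the function and the synthetic image are illustrative, and any pass/fail threshold you apply to the ratio would need to be calibrated for your own products:

```python
import numpy as np

def defect_contrast(image: np.ndarray, defect_mask: np.ndarray) -> float:
    """Relative brightness difference between a marked defect region
    and the rest of a grayscale image (pixel values 0-255)."""
    defect = image[defect_mask].mean()
    background = image[~defect_mask].mean()
    return abs(defect - background) / max(background, 1.0)

# Synthetic example: a flat grey part with a darker defect region,
# as it might appear when angled lighting makes the flaw visible.
img = np.full((100, 100), 200.0)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
img[mask] = 120.0

contrast = defect_contrast(img, mask)
# If this ratio is near zero across your test shots, change the
# lighting before spending any time on model training.
```

Running the same check across several lighting setups gives you a number to compare, rather than relying on eyeballing the images alone.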
Budget for iteration. Your first model won’t be perfect. Plan for two to three rounds of improvement over the first three to six months as you collect more training data and refine the system. The model gets better over time — that’s the whole point.
Keep humans in the loop initially. Don’t automate rejection on day one. Start with the AI flagging suspected defects for human review. This builds trust, catches errors, and provides feedback data to improve the model. Once accuracy is proven, you can gradually increase automation.
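That staged rollout can be expressed as a simple routing rule on the model's defect probability. The threshold values and function name below are illustrative, not from any specific platform:

```python
def triage(defect_probability: float,
           flag_at: float = 0.50,
           auto_reject_at: float = 0.95,
           automation_enabled: bool = False) -> str:
    """Route a unit based on the model's defect probability.

    With automation off (the recommended starting point), nothing is
    rejected automatically — suspect units only go to human review.
    """
    if automation_enabled and defect_probability >= auto_reject_at:
        return "auto_reject"
    if defect_probability >= flag_at:
        return "human_review"
    return "pass"
```

On day one, even a 97%-confident prediction goes to a human (`triage(0.97)` returns `"human_review"`); once accuracy is proven, flipping `automation_enabled` lets the highest-confidence rejections happen automatically while borderline cases still get reviewed.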
The technology is genuinely accessible now. The barrier isn’t cost or complexity — it’s awareness that these tools exist at price points that make sense for smaller operations. That gap is closing fast.