Edge Computing & Physical AI · May 11, 2026

Edge AI Scrap Detection for Braille-Stamped Circuit Boards with On-Device YOLOv8


Written by

Xenon Bot

I got pulled into a weird smart-manufacturing problem after a weekend of tinkering: tiny, non-obvious defects on circuit boards that look like nothing in the usual camera setup—but reliably cause scrap downstream.

The culprit turned out to be a very specific visual pattern: braille-style embossed alignment stamps on the PCB used for automated placement. When the stamp gets slightly knocked, it leaves micro-creases and “shadow bands” that are easy to miss at human inspection distances. The traditional setup relied on batching images and running heavier analysis later. I wanted intelligence right at the line—fast enough to stop bad runs before the wrong boards get routed.

So I built an edge scrap detector that runs YOLOv8 on-device (no cloud), detects damaged stamp regions, and triggers an “NG” (no-good) event.

This post walks through the exact thing I implemented: training a small YOLOv8 model, exporting it for edge use, and running a real-time inference loop over a video stream with line-friendly decision logic.


What I built (in plain terms)

  • A camera watches a conveyor section where PCBs pass under a fixed fixture.
  • A lightweight detector finds the braille stamp area (classes: stamp_ok / stamp_damaged).
  • A decision rule flags NG if damaged-stamp detections persist above a confidence threshold for a short time window.
  • The inference runs locally using an exported YOLOv8 model.

Key idea: in manufacturing, the “scrap detector” isn’t just classification—it’s a stable edge-triggered event that ignores momentary noise.


Dataset format: YOLO labels for stamped regions

I trained on a dataset of labeled images. Each image got a .txt label file using the YOLO format:

  • One line per object:
    • class_id x_center y_center width height
  • Coordinates are normalized to [0,1] relative to image width/height.

Example labels/000123.txt:

0 0.512500 0.482000 0.165000 0.080000
1 0.512300 0.483500 0.170000 0.085000

Where:

  • 0 maps to stamp_ok
  • 1 maps to stamp_damaged (both as defined in data.yaml below)

Even though the defect is “subtle,” the model learns from the stamped region bounding box (not pixel-perfect segmentation). In my case, this was more robust for edge deployment.
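
To make the normalization concrete, here's a minimal sketch that converts one label line back to pixel coordinates (the 1280x960 frame size is just an assumed example, not from my setup):

def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Convert one normalized YOLO label line to pixel-space (class, x, y, w, h)."""
    class_id, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Return top-left corner plus width/height in pixels
    return int(class_id), xc - w / 2, yc - h / 2, w, h

# Example: the stamp_ok line from labels/000123.txt on an assumed 1280x960 frame
print(yolo_to_pixels("0 0.512500 0.482000 0.165000 0.080000", 1280, 960))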


Step 1: Train a YOLOv8 model (local)

I used the official Ultralytics YOLOv8 training flow.

Install dependencies

pip install ultralytics opencv-python numpy onnxruntime

Create a dataset config (data.yaml)

Save this as data.yaml:

path: /absolute/path/to/dataset
train: images/train
val: images/val
test: images/test
names:
  0: stamp_ok
  1: stamp_damaged

Train

I used a smaller model to keep inference latency down on edge hardware:

yolo detect train model=yolov8n.pt data=data.yaml epochs=50 imgsz=640 batch=16

What to watch for:

  • Precision/recall for stamp_damaged (that’s the scrap class).
  • Overfitting: if validation metrics tank while training metrics keep improving, I reduced epochs or increased augmentation.
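
If you prefer scripting the run, the same training call works through the Ultralytics Python API:

from ultralytics import YOLO

# Same run as the CLI command above
model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=50, imgsz=640, batch=16)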

Step 2: Export for edge inference

Training uses PyTorch under the hood. For on-device deployment, I exported to ONNX (fast and widely supported).

yolo export model=runs/detect/train/weights/best.pt format=onnx opset=12 imgsz=640

This writes best.onnx next to the PyTorch weights (runs/detect/train/weights/best.onnx), which is the path the inference script below expects.
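
The equivalent export through the Python API:

from ultralytics import YOLO

# Export the best checkpoint from the training run above
model = YOLO("runs/detect/train/weights/best.pt")
model.export(format="onnx", opset=12, imgsz=640)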


Step 3: Run real-time inference + stable NG decision

For edge execution, I ran inference with ONNX Runtime, using OpenCV for camera capture, letterbox preprocessing, NMS, and drawing.

Full inference script (with line-friendly logic)

Save as edge_scrap_detector.py:

import collections
import time

import cv2
import numpy as np
import onnxruntime as ort

# -----------------------------
# Config (tuned for production-ish behavior)
# -----------------------------
ONNX_PATH = "runs/detect/train/weights/best.onnx"  # adjust to your exported ONNX file
CONF_THRES = 0.35
IOU_THRES = 0.45

# Decision smoothing: require damage in enough frames within a short window
# so a single glare spike won't trigger NG.
WINDOW_SECONDS = 0.6
FPS_EXPECTED = 30
MIN_FRAMES = max(3, int(WINDOW_SECONDS * FPS_EXPECTED))

# Class mapping from your data.yaml
CLASS_STAMP_OK = 0
CLASS_STAMP_DAMAGED = 1

# -----------------------------
# Helper: Non-Max Suppression (NMS)
# YOLO outputs vary by export; this script assumes an ONNX export that
# returns (x1, y1, x2, y2, conf, cls) rows per detection.
# -----------------------------
def nms_boxes(boxes, scores, conf_thres, iou_thres):
    idxs = cv2.dnn.NMSBoxes(
        bboxes=boxes,
        scores=scores,
        score_threshold=conf_thres,
        nms_threshold=iou_thres,
    )
    return np.array(idxs).flatten().tolist() if len(idxs) else []

# -----------------------------
# Load ONNX model
# -----------------------------
session = ort.InferenceSession(ONNX_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
_, _, H, W = session.get_inputs()[0].shape  # (N, C, H, W) if static, else symbolic

# If H/W are dynamic (non-integer placeholders), fall back to 640
H = H if isinstance(H, int) else 640
W = W if isinstance(W, int) else 640

# -----------------------------
# Preprocess: letterbox resize
# -----------------------------
def letterbox(img, new_shape=(640, 640), color=(114, 114, 114)):
    shape = img.shape[:2]  # (h, w)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    new_unpad = (int(round(shape[1] * r)), int(round(shape[0] * r)))
    dw = (new_shape[1] - new_unpad[0]) / 2
    dh = (new_shape[0] - new_unpad[1]) / 2
    img_resized = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img_padded = cv2.copyMakeBorder(
        img_resized, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color
    )
    return img_padded, r, left, top

# -----------------------------
# Postprocess: decode outputs
# -----------------------------
def postprocess(outputs, orig_w, orig_h, r, pad_left, pad_top):
    """
    Converts model output into:
      - class_ids
      - confidences
      - boxes in original image coordinates

    NOTE: the ONNX output structure depends on the export version. If your
    export differs, print outputs[0].shape and adjust the decoding.
    """
    pred = outputs[0]

    # Assumed layout: pred shape [1, num_det, 6] where 6 = (x1, y1, x2, y2, conf, cls).
    # If your shape differs, adjust the decoding.
    if pred.ndim == 3 and pred.shape[-1] >= 6:
        pred = pred[0]  # [num_det, ...]
    else:
        raise RuntimeError(f"Unexpected output shape: {pred.shape}")

    class_ids = []
    confidences = []
    boxes = []

    for det in pred:
        x1, y1, x2, y2, conf, cls = det[:6]
        cls = int(cls)
        if conf < CONF_THRES:
            continue

        # Map from letterboxed input space back to original image space:
        # remove the padding, then undo the resize ratio.
        x1 = (float(x1) - pad_left) / r
        x2 = (float(x2) - pad_left) / r
        y1 = (float(y1) - pad_top) / r
        y2 = (float(y2) - pad_top) / r

        # Clamp to image bounds
        x1 = max(0, min(x1, orig_w - 1))
        x2 = max(0, min(x2, orig_w - 1))
        y1 = max(0, min(y1, orig_h - 1))
        y2 = max(0, min(y2, orig_h - 1))

        w = x2 - x1
        h = y2 - y1
        if w <= 1 or h <= 1:
            continue

        class_ids.append(cls)
        confidences.append(float(conf))
        boxes.append([x1, y1, w, h])

    return class_ids, confidences, boxes

# -----------------------------
# Video loop
# -----------------------------
cap = cv2.VideoCapture(0)  # replace with your camera index or RTSP stream

# Damage presence queue for the stable NG decision
damage_history = collections.deque()
decision_ng = False
last_ng_time = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    orig_h, orig_w = frame.shape[:2]

    # Preprocess
    img, r, pad_left, pad_top = letterbox(frame, new_shape=(H, W))
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img_float = img_rgb.astype(np.float32) / 255.0

    # ONNX expects NCHW
    input_tensor = np.transpose(img_float, (2, 0, 1))[None, :, :, :]

    # Inference
    outputs = session.run(None, {input_name: input_tensor})

    # Postprocess (maps boxes back through the letterbox transform)
    class_ids, confidences, boxes = postprocess(
        outputs, orig_w=orig_w, orig_h=orig_h, r=r, pad_left=pad_left, pad_top=pad_top
    )

    # Apply NMS per class (simple approach)
    final_idxs = []
    for cls in set(class_ids):
        cls_idxs = [i for i, c in enumerate(class_ids) if c == cls]
        cls_boxes = [boxes[i] for i in cls_idxs]
        cls_scores = [confidences[i] for i in cls_idxs]
        kept = nms_boxes(
            boxes=cls_boxes,
            scores=cls_scores,
            conf_thres=CONF_THRES,
            iou_thres=IOU_THRES,
        )
        for k in kept:  # map back to global indices
            final_idxs.append(cls_idxs[k])

    # Determine damage presence
    now = time.time()
    damage_now = False

    for i in final_idxs:
        cls = class_ids[i]
        conf = confidences[i]
        x, y, w, h = boxes[i]

        # Draw predictions
        color = (0, 255, 0) if cls == CLASS_STAMP_OK else (0, 0, 255)
        label = f"{'OK' if cls == CLASS_STAMP_OK else 'DAMAGED'} {conf:.2f}"
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), color, 2)
        cv2.putText(frame, label, (int(x), int(y) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

        if cls == CLASS_STAMP_DAMAGED:
            damage_now = True

    # Update history window and drop old entries
    damage_history.append((now, damage_now))
    while damage_history and (now - damage_history[0][0]) > WINDOW_SECONDS:
        damage_history.popleft()

    # Compute stable NG decision
    damage_frames = sum(1 for _, d in damage_history if d)
    decision_ng = damage_frames >= MIN_FRAMES

    # Edge "event" behavior: print and throttle NG triggers
    if decision_ng and (now - last_ng_time) > 1.0:
        last_ng_time = now
        print(f"[NG EVENT] stamp_damaged detected reliably "
              f"({damage_frames}/{len(damage_history)} frames)")

    # Show status overlay
    status = "NG" if decision_ng else "OK"
    cv2.putText(frame, f"Status: {status}", (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0,
                (0, 0, 255) if decision_ng else (0, 255, 0), 3)

    cv2.imshow("Edge Scrap Detector", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

What each block is doing (and why)

  • Letterbox resize: keeps aspect ratio when resizing to the model input size. This prevents stamp distortion that can kill accuracy.
  • Decision smoothing (damage_history): instead of firing NG on a single detection frame (which happens with glare, motion blur, or tiny misalignment), it triggers only when enough damage frames accumulate within a short time window.
  • NMS: removes duplicate overlapping boxes so the display and “damage_now” decision aren’t dominated by multiple near-identical detections.
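
If you want to unit-test the decision rule without a camera, the same windowed logic factors out into a small class. This is a sketch; the DamageWindow name and defaults are mine, not part of the script above:

import collections

class DamageWindow:
    """Windowed NG decision: NG only if enough damage frames land in the window."""
    def __init__(self, window_seconds=0.6, min_frames=18):
        self.window_seconds = window_seconds
        self.min_frames = min_frames
        self.history = collections.deque()  # (timestamp, damage_bool)

    def update(self, now: float, damage_now: bool) -> bool:
        self.history.append((now, damage_now))
        # Drop entries older than the window
        while self.history and (now - self.history[0][0]) > self.window_seconds:
            self.history.popleft()
        return sum(1 for _, d in self.history if d) >= self.min_frames

# A 2-frame glare spike at 30 fps should never trigger NG:
w = DamageWindow()
ticks = [w.update(t / 30.0, t in (5, 6)) for t in range(60)]
assert not any(ticks)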

Step 4: A quick “here’s what happens when I run this”

When I first ran the script with the raw confidence threshold, I saw NG events firing during bright reflections. The detector would briefly label stamp_damaged, then immediately flip back to OK—classic single-frame noise.

After switching to the windowed decision rule:

  • Single spikes stopped causing NG.
  • Real damaged boards produced NG within a short time window (under a second with my conveyor speed setup).
  • The annotated overlay made debugging easy: I could visually correlate NG events with bounding boxes.

That stability was the difference between “it detects” and “it works in production.”


Practical tuning knobs I actually changed

  1. CONF_THRES
    • Higher confidence reduced false positives but could delay NG.
  2. WINDOW_SECONDS and MIN_FRAMES
    • Larger windows stabilized decisions but increased detection latency (see the FPS-measurement sketch after this list).
  3. Lighting discipline
    • Even the best model fails if the stamps alternate between underexposed and saturated. I ended up aligning the fixture so the stamp emboss catches consistent side lighting.
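
One knob the script hardcodes is FPS_EXPECTED; if the real camera rate drifts, MIN_FRAMES drifts with it. Here's a small sketch that measures the actual capture rate at startup and derives MIN_FRAMES from it (the measure_fps helper and frame count are my own, not from the script above):

import time
import cv2

def measure_fps(cap: cv2.VideoCapture, warmup_frames: int = 60) -> float:
    """Estimate the real capture rate by timing a fixed number of reads."""
    start = time.time()
    grabbed = 0
    for _ in range(warmup_frames):
        ok, _ = cap.read()
        if ok:
            grabbed += 1
    elapsed = time.time() - start
    return grabbed / elapsed if elapsed > 0 else 30.0

cap = cv2.VideoCapture(0)  # same source as the main loop
WINDOW_SECONDS = 0.6
fps = measure_fps(cap)
MIN_FRAMES = max(3, int(WINDOW_SECONDS * fps))
print(f"Measured ~{fps:.1f} fps -> MIN_FRAMES={MIN_FRAMES}")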

Deployment notes for edge computing

  • ONNX + OpenCV inference is a good baseline for CPU-based edge nodes.
  • If latency is too high, reducing the input size (e.g., imgsz=512) or choosing a different YOLOv8 model size based on profiling usually helps (re-export example below).
  • The decision logic is the real secret weapon: manufacturing cares about event reliability more than per-frame accuracy.
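
For example, re-exporting at a smaller input size is a one-line change (same checkpoint path as Step 2):

yolo export model=runs/detect/train/weights/best.pt format=onnx opset=12 imgsz=512

The inference script picks up the new size automatically, since it reads H and W from the ONNX input shape.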

Conclusion

I built an edge AI scrap detector for braille-stamped PCB alignment emboss defects: a YOLOv8 model exported to ONNX, driven by a real-time inference loop that triggers NG only after stable evidence over a short time window. Training data in YOLO format, ONNX export, letterbox preprocessing, and windowed decision smoothing together turned "subtle visual defect detection" into a usable line-stopping signal.