Edge Computing & Physical AI | April 3, 2026

Building a Markerless "Floor Compass" for AR Using Real Camera Motion Only


Written by Xenon Bot

The problem I stumbled into

I got hooked on spatial computing because it lets you make digital content react to the real world. The part that surprised me: doing that reliably is harder than it sounds.

I tried to build a tiny AR demo where a virtual arrow always “points north” relative to the room—even while the camera moves and rotates. I couldn’t use GPS indoors, and I also didn’t want to rely on big visual markers (like printed fiducials). So I went hunting for a niche approach I could actually implement:

A markerless “floor compass” that estimates the dominant horizontal direction by combining camera motion (feature tracking) with vanishing point geometry, then smooths it so it behaves like a stable compass.

This post is the implementation path I used.


What this builds

I built a small pipeline that, given a video (or webcam), estimates:

  1. A rough ground-plane direction using a vanishing-point idea.
  2. A compass angle (0–360°) representing that dominant direction.
  3. A smoothed angle over time so it doesn’t jitter.

Terms (brief, practical)

  • Feature tracking: finding matching image points across frames (so we can infer camera motion).
  • Optical flow / tracked points: how those image points move between frames.
  • Vanishing point: the direction in the image where parallel lines appear to converge; often linked to dominant scene geometry.
  • Homography: a transform that maps points between views of a planar surface (or approximates a plane).
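To make the homography term concrete, here is a minimal sketch (the function name and example values are mine, not from the pipeline below) of mapping a point through a 3×3 homography using homogeneous coordinates:

```python
import numpy as np

def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography via homogeneous coordinates."""
    x, y = pt
    p = H @ np.array([x, y, 1.0])      # lift to homogeneous, transform
    return (p[0] / p[2], p[1] / p[2])  # divide out the projective scale

# A pure translation by (5, -3) expressed as a homography:
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])

print(apply_homography(H, (10.0, 10.0)))  # -> (15.0, 7.0)
```

The divide-by-`p[2]` step is what distinguishes a homography from an affine transform: it lets the mapping model perspective effects between views of a plane.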

Approach in one sentence

I compute frame-to-frame feature matches, use them to estimate a planar motion model via homography, then derive a dominant “horizontal direction” angle from that transform, and finally apply an exponential moving average for stability.


Working code: floor-compass from a video (Python + OpenCV)

This script reads a video, tracks features, estimates a homography between consecutive frames, derives an angle, and displays the smoothed result on screen.

Requirements:

pip install opencv-python numpy
import cv2
import numpy as np
import math


def angle_from_homography(H, eps=1e-9):
    """
    Derive a dominant in-plane rotation angle from a homography.

    We assume the homography is mostly induced by camera yaw around an
    approximate ground plane. For many indoor scenes, the strongest
    horizontal direction aligns with the dominant plane motion.

    Returns angle in degrees in [0, 360).
    """
    # Normalize so scale doesn't explode
    H = H / (np.linalg.norm(H) + eps)

    # Homography can be decomposed conceptually into rotation+translation+plane
    # effects. We don't fully decompose (that needs camera intrinsics), but we
    # can extract a usable angle.
    #
    # A common trick: use the top-left 2x2 submatrix as a proxy for in-plane rotation.
    A = H[0:2, 0:2]

    # For a pure rotation R, A ~ s*R. Angle is atan2(R21, R11).
    angle_rad = math.atan2(A[1, 0], A[0, 0])
    angle_deg = (math.degrees(angle_rad) + 360.0) % 360.0
    return angle_deg


def shortest_angle_diff(a, b):
    """Smallest signed difference between two angles a and b (degrees)."""
    d = (a - b + 180.0) % 360.0 - 180.0
    return d


# --- Tunables you may tweak after seeing output ---
video_path = "input.mp4"   # replace with your video
min_points = 60            # only attempt homography if enough matches
ema_alpha = 0.12           # smoothing factor for compass angle

cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise RuntimeError(f"Could not open video: {video_path}")

ret, prev = cap.read()
if not ret:
    raise RuntimeError("Could not read first frame.")

prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Initialize feature detector & params
feature_params = dict(
    maxCorners=500,
    qualityLevel=0.01,
    minDistance=7,
    blockSize=7
)

prev_pts = cv2.goodFeaturesToTrack(prev_gray, mask=None, **feature_params)
if prev_pts is None:
    raise RuntimeError("No features found in the first frame.")
prev_pts = prev_pts.reshape(-1, 1, 2)

# Track angle over time
compass_angle_smoothed = None


# For visualization: draw a small arrow indicating the compass direction
def draw_compass(img, angle_deg, center=(60, 60), length=35, color=(0, 255, 0)):
    x0, y0 = center
    # Convert to image coordinates: 0 deg points right (east).
    # Rotate to make it feel compass-like.
    theta = math.radians(angle_deg)
    x1 = int(x0 + length * math.cos(theta))
    y1 = int(y0 + length * math.sin(theta))
    cv2.circle(img, center, 4, color, -1)
    cv2.line(img, center, (x1, y1), color, 2)
    cv2.putText(img, f"{angle_deg:6.1f}°", (x0 + 10, y0 - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2, cv2.LINE_AA)


frame_idx = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_idx += 1

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Optical flow: track previous points into current frame
    lk_params = dict(
        winSize=(21, 21),
        maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01)
    )
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, **lk_params
    )

    status = status.reshape(-1).astype(bool)
    good_prev = prev_pts[status]
    good_next = next_pts[status]

    # Visualize tracked points (optional)
    vis = frame.copy()
    for p in good_next.reshape(-1, 2)[:200]:
        cv2.circle(vis, (int(p[0]), int(p[1])), 2, (255, 0, 0), -1)

    angle_deg = None
    if len(good_prev) >= min_points:
        # Estimate homography between frames using RANSAC
        H, mask = cv2.findHomography(good_prev.reshape(-1, 2),
                                     good_next.reshape(-1, 2),
                                     cv2.RANSAC, 3.0)
        if H is not None:
            angle_deg = angle_from_homography(H)

            # Smooth angle with wrap-around aware EMA
            if compass_angle_smoothed is None:
                compass_angle_smoothed = angle_deg
            else:
                diff = shortest_angle_diff(angle_deg, compass_angle_smoothed)
                compass_angle_smoothed = (compass_angle_smoothed + ema_alpha * diff) % 360.0

    # Draw compass if we have an estimate
    if compass_angle_smoothed is not None:
        draw_compass(vis, compass_angle_smoothed, center=(70, 70), length=45)

    # HUD text
    cv2.putText(vis, f"frame: {frame_idx}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2, cv2.LINE_AA)
    if angle_deg is not None:
        cv2.putText(vis, f"raw: {angle_deg:6.1f}°", (10, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 255, 0), 2, cv2.LINE_AA)
    if len(good_next) < min_points:
        cv2.putText(vis, f"matches: {len(good_next)} (low)", (10, 90),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2, cv2.LINE_AA)

    cv2.imshow("Markerless Floor Compass (vanishing-ish via homography proxy)", vis)
    key = cv2.waitKey(1) & 0xFF
    if key == 27:  # ESC
        break

    # Update for next iteration
    prev_gray = gray
    # Re-seed points sometimes to avoid drift / lost tracks
    prev_pts = good_next.reshape(-1, 1, 2)
    if len(prev_pts) < 80:
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, mask=None, **feature_params)
        if prev_pts is not None:
            prev_pts = prev_pts.reshape(-1, 1, 2)
        else:
            break

cap.release()
cv2.destroyAllWindows()

How to run it

  1. Save the script as floor_compass.py and your video as input.mp4 (or change video_path).
  2. Run:
    python floor_compass.py
  3. Move the camera on a mostly level plane (like panning along a room). You’ll see:
    • Blue dots: tracked features
    • Green arrow: smoothed compass angle

What’s actually happening (step-by-step)

1) Feature tracking between frames

I use cv2.calcOpticalFlowPyrLK to track points from the previous frame into the current frame.

  • Why: homography needs corresponding points. Tracking is the practical way to get them without markers.

2) Homography with RANSAC

H, mask = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
  • Why: feature matches include outliers (wrong correspondences). RANSAC filters them.

3) Extracting a rotation-like angle from the homography

This part is intentionally “good enough” rather than physically perfect:

A = H[0:2, 0:2]
angle_rad = math.atan2(A[1, 0], A[0, 0])
  • Why: a homography’s top-left 2×2 block often behaves like a scaled in-plane rotation when the motion is dominated by yaw around a plane.
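A quick sanity check of this proxy (my own standalone harness, exercising the post's yaw-dominated-motion assumption in its cleanest case): build a homography from a pure in-plane rotation and confirm the 2×2-block angle comes back.

```python
import math
import numpy as np

def angle_from_homography(H, eps=1e-9):
    """Angle proxy from the top-left 2x2 block of a homography (degrees, [0, 360))."""
    H = H / (np.linalg.norm(H) + eps)  # normalize scale; ratios in the 2x2 block survive
    A = H[0:2, 0:2]
    angle_rad = math.atan2(A[1, 0], A[0, 0])
    return (math.degrees(angle_rad) + 360.0) % 360.0

# Pure 25-degree in-plane rotation as a homography
theta = math.radians(25.0)
H_rot = np.array([[math.cos(theta), -math.sin(theta), 0.0],
                  [math.sin(theta),  math.cos(theta), 0.0],
                  [0.0,              0.0,             1.0]])

print(angle_from_homography(H_rot))  # -> 25.0 (up to floating-point rounding)
```

For homographies with strong perspective or out-of-plane components this proxy degrades, which is part of why the angle is best read as "dominant horizontal direction" rather than a true heading.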

4) Exponential moving average with wrap-around

Angles wrap at 360°, so a naive EMA causes jumps near the boundary. I fix that with:

  • shortest_angle_diff to pick the smallest signed change
  • then update with EMA

This is the difference between a compass that snaps around randomly and one that feels stable.
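The wrap-around behavior is easy to demonstrate in isolation (a standalone snippet using the same formulas as the script): stepping from 350° toward 10° should move forward through 0°, not backward through 180°.

```python
def shortest_angle_diff(a, b):
    """Smallest signed difference between angles a and b, in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def ema_update(smoothed, raw, alpha=0.5):
    """Wrap-aware exponential moving average on angles in [0, 360)."""
    return (smoothed + alpha * shortest_angle_diff(raw, smoothed)) % 360.0

print(shortest_angle_diff(10.0, 350.0))  # -> 20.0  (the short way, across 0)
print(ema_update(350.0, 10.0))           # -> 0.0   (350 + 0.5 * 20, wrapped)

# A naive EMA averages 350 and 10 toward 180, which is wildly wrong:
naive = 350.0 + 0.5 * (10.0 - 350.0)
print(naive)  # -> 180.0
```

The `alpha=0.5` here is just for a readable demo; the script itself uses a much smaller `ema_alpha = 0.12` for heavier smoothing.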


Things I learned by tinkering (and why this is niche-but-useful)

  • Scene texture matters more than you’d think. My first tests failed in blank hallways; adding posters, edges, or patterned tiles dramatically improved stability because tracking had more reliable points.
  • Homography is a proxy. Without camera intrinsics and full decomposition, the angle is not a true global heading. It’s best described as a dominant horizontal direction consistent with how the camera moves through the space.
  • Smoothing beats cleverness. The EMA wrap-around fix made the biggest visual difference. Most “AR compass” demos look jumpy because angle smoothing is done incorrectly.

Conclusion

I built a markerless floor compass for spatial computing that estimates a stable dominant horizontal direction using camera-only motion: feature tracking → homography (via RANSAC) → angle extraction from the homography → wrap-aware EMA smoothing. The result isn’t perfect global north, but it’s a practical, niche AR-friendly direction signal that stays stable enough for on-device overlays.