Students' Final Project Presentation - Robotics Developer Masterclass
Why It Matters
The demo proves that open‑source robotics tools can deliver real‑time, vision‑guided interaction, accelerating prototyping for education and research in human‑robot collaboration.
Key Takeaways
- Integrated ROS, OpenCV, and MoveIt to build a tic‑tac‑toe‑playing robot arm.
- Tested in Gazebo simulation and RViz before deploying on real hardware.
- Fixed camera and board positions simplified perception, but variable lighting remained a problem.
- Detected vertical and horizontal lines separately, then clustered the candidates to isolate the grid.
- Distinguished Xs from Os through contour solidity analysis.
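The clustering step above merges nearby line candidates (e.g., the x‑coordinates of detected vertical edges) into a few averaged grid‑line positions. The presentation does not show the actual code, but under the assumption of simple 1‑D merging by pixel distance, a minimal sketch could look like this (`cluster_positions` and its `tol` parameter are hypothetical names, not the presenter's):

```python
def cluster_positions(candidates, tol=10):
    """Merge 1-D line-candidate coordinates that lie within `tol` pixels
    of their neighbor, returning one averaged position per cluster
    (e.g., the two vertical grid lines of a tic-tac-toe board)."""
    clusters = []
    for x in sorted(candidates):
        if clusters and x - clusters[-1][-1] <= tol:
            clusters[-1].append(x)   # close to previous candidate: same line
        else:
            clusters.append([x])     # large gap: start a new line cluster
    return [sum(c) / len(c) for c in clusters]

# Five noisy candidates collapse into two grid-line positions.
centers = cluster_positions([98, 100, 103, 200, 205])
```

In a real pipeline the candidates would come from an edge detector (e.g., a Sobel filter followed by a Hough transform), and `tol` would be tuned to the expected line thickness in pixels.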
Summary
The final project presentation of the Robotics Developer Masterclass showcased Aaron Emer’s "tic‑tac‑toe bot," a robotic arm that plays tic‑tac‑toe against a human opponent using computer vision and motion planning. The system combines the ROS framework, OpenCV for perception, MoveIt for trajectory execution, and a Vue.js web interface for live control, with extensive testing in Gazebo simulation and RViz visualization before moving to a real‑world setup in a Spanish lab.
Emer outlined a four‑module architecture: an order module that issues commands, a perception module that extracts the board state from camera images, a decision module that computes the optimal move, and a trajectory module that drives the arm. The robot draws the grid, places Xs, and reacts to human‑drawn Os, while the web app streams camera feeds and visualizes detected lines and shapes. Key technical hurdles included variable lighting, overlapping grid lines, and mis‑detections when human marks were near the board edges.
During the live demo, the robot successfully traced the grid and placed its first X, but struggled to recognize a circle placed close to a grid line, illustrating the limits of the contour‑based detection pipeline. Emer explained that separating vertical and horizontal line detection, applying Sobel filters, and clustering line candidates reduced noise, while solidity analysis of contours helped distinguish Xs from Os.
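Solidity is the ratio of a contour's area to the area of its convex hull: an O's outline is nearly convex (solidity close to 1), while a drawn X is strongly concave (solidity well below 1). In practice this would use OpenCV's `cv2.contourArea` and `cv2.convexHull`; the dependency‑free sketch below implements the same ratio from scratch, with a plus‑shaped polygon standing in for an X‑like concave contour:

```python
def shoelace_area(poly):
    """Area of a simple polygon given as a list of (x, y) vertices."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def convex_hull(points):
    """Andrew's monotone-chain convex hull of a 2-D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def turn(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    hull = []
    for seq in (pts, list(reversed(pts))):   # lower hull, then upper hull
        chain = []
        for p in seq:
            while len(chain) >= 2 and turn(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])
    return hull

def solidity(contour):
    """Contour area / convex-hull area: ~1.0 for convex O-like shapes,
    noticeably lower for concave X-like shapes."""
    return shoelace_area(contour) / shoelace_area(convex_hull(contour))

# A plus-shaped (concave, X-like) polygon vs. a square (convex, O-like) one.
cross_shape = [(1, 0), (2, 0), (2, 1), (3, 1), (3, 2), (2, 2),
               (2, 3), (1, 3), (1, 2), (0, 2), (0, 1), (1, 1)]
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
```

A threshold somewhere between the two solidity values (here roughly 0.71 for the plus shape versus 1.0 for the square) separates the two classes; the exact cutoff would be tuned on real camera contours.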
The project demonstrates how a modular ROS stack can integrate perception, planning, and actuation for interactive human‑robot games, offering a template for educational robotics and a testbed for advancing real‑time vision‑driven manipulation. It highlights the importance of robust preprocessing and fixed‑environment assumptions when deploying vision systems in variable lighting conditions.