The following list contains additional information and comments about the project.
Mon Feb 20 13:53:49 CST 2006: Assignment opened
This page describes the second assignment for the course 74.795-L01 Mobile Robotics Using Local Vision.
The goal of this assignment is to implement a practical vision-based localization methodology for a small intelligent mobile robot. This assignment consists of two parts:
Visual Localization: The robot has to accurately determine its position using landmark-based localization (e.g., triangulation of known target points), relative localization (e.g., tracking of feature lines), and other methods.
Visual Ego-motion Estimation: The robot has to use vision feedback to estimate its motion in an unknown environment.
We set up a line world in the Autonomous Agents lab in EITC E2 504.
Some parts of the environment are marked with purple line segments. There are also four extra localization markers placed in the environment. The localization markers are round, two-coloured, 30 cm tall poles. The poles A1 and A2 are red on top of yellow, and the poles B1 and B2 are yellow on top of red.
This assignment can be broken down into two parts: (a) vision-based ego-motion estimation and (b) localization.
First, you need to implement an optical flow tracker. You can use a dense, region-based method (e.g., Horn-Schunck) or a feature-based tracker (e.g., Lucas-Kanade/KLT) to track the motion of areas in the image.
Given the feedback from the optical flow, your robot needs to estimate its motion relative to the environment. By integrating this motion estimate over time, your robot should be able to follow the predefined triangle path.
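As a minimal sketch of the integration step (not part of the handout): once each frame's optical flow has been reduced to a per-frame motion estimate (forward distance travelled and heading change), the robot's pose can be dead-reckoned by accumulating those estimates. The function name and the (forward, heading-change) parameterization below are illustrative assumptions.

```python
import math

def integrate_motion(pose, steps):
    """Dead-reckon a 2-D pose (x, y, theta) from per-frame motion
    estimates (forward distance, heading change), such as those
    recovered from optical flow.  Angles are in radians."""
    x, y, theta = pose
    for forward, dtheta in steps:
        theta += dtheta              # apply the heading change first
        x += forward * math.cos(theta)
        y += forward * math.sin(theta)
    return (x, y, theta)

# Three equal legs with 120-degree turns trace an equilateral
# triangle, so the integrated pose returns to the starting point.
legs = [(1.0, 0.0), (1.0, 2 * math.pi / 3), (1.0, 2 * math.pi / 3)]
pose = integrate_motion((0.0, 0.0, 0.0), legs)
```

Note that errors in the per-frame estimates accumulate without bound, which is why the landmark-based fix described below is needed to correct the drift.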
Since localization markers are present in the domain, your algorithm can use landmark-based localization (i.e., triangulating its position from the positions of at least two known landmarks) as well as integrating its motion estimate from the tracked motion of other features (e.g., lines) in the image.
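One way to sketch the triangulation step: if the robot measures the world-frame bearing to each of two landmarks with known positions (e.g., A1 and B1), its position is the intersection of the two rays from the landmarks back along those bearings. The landmark coordinates in the example are illustrative only, not the actual lab layout.

```python
import math

def triangulate(l1, b1, l2, b2):
    """Recover the robot's (x, y) position from world-frame bearings
    b1, b2 (radians) measured from the robot toward two landmarks at
    known positions l1, l2.  The robot lies at li - ti * di, where
    di is the unit direction of bearing bi and ti the (unknown) range."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve l1 - t1*d1 == l2 - t2*d2, i.e. t1*d1 - t2*d2 == l1 - l2,
    # as a 2x2 linear system via Cramer's rule.
    det = -d1[0] * d2[1] + d1[1] * d2[0]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique position fix")
    rx, ry = l1[0] - l2[0], l1[1] - l2[1]
    t1 = (-rx * d2[1] + ry * d2[0]) / det
    return (l1[0] - t1 * d1[0], l1[1] - t1 * d1[1])

# Robot at (1, 1) sighting hypothetical landmarks at (0, 0) and (4, 0):
x, y = triangulate((0.0, 0.0), math.atan2(-1.0, -1.0),
                   (4.0, 0.0), math.atan2(-1.0, 3.0))
```

In practice the bearing to a landmark comes from the robot's estimated heading plus the landmark's bearing in the image, so heading error degrades the fix; using more than two landmarks lets you average out some of that error.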
The winner of this assignment will be determined by a race, which will take place on 13 March 2006 at 17:00 in the Autonomous Agents lab.
The race will consist of two events: Treasure Hunt and Roaming Rover.
Treasure Hunt: the robot will be given five target locations in the line world. Note that the target locations may not be visibly marked. The robot has to move to these five target locations and indicate to the referee when it thinks it is at one of the target locations. The distance from the robot to the actual target location is used to calculate the accuracy score of the robot.
Roaming Rover: the robot has to follow the simple path shown below. The robot has to stop at each track point and the accuracy at this point will be measured. The score of the robot is the final accuracy of the robot after three laps around the track.
The score of a robot is determined by the sum of the scores for the two events.
Each robot is assigned a scale factor. The scale factor is used to compensate for the fact that it is more difficult for a smaller robot to drive at the same speed as a larger robot. The scale factor is determined by the maximum dimension of the robot.
The scale factor is calculated as (Max Dimension of Robot)/10cm.
This means a robot with a maximum dimension of 10cm is assigned a scale factor of 1.0; a robot with a maximum dimension of 5cm, a scale factor of 0.5; and a robot with a maximum dimension of 40cm, a scale factor of 4.0.
The scaled time of a robot is determined by (Time to complete three laps + penalty time) * scale factor.
The robot with the best (lowest) scaled time is declared the winner.
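The scoring formula above can be written out directly; the function name and the example times below are illustrative, not official race data.

```python
def scaled_time(three_lap_time_s, penalty_s, max_dimension_cm):
    """Scaled race time per the assignment's formula:
    scale factor = (max dimension of robot) / 10cm,
    scaled time  = (time to complete three laps + penalty) * scale factor."""
    scale_factor = max_dimension_cm / 10.0
    return (three_lap_time_s + penalty_s) * scale_factor

# A 10cm robot keeps its raw time; a 5cm robot's time is halved.
t_small = scaled_time(120.0, 10.0, 10.0)   # (120 + 10) * 1.0 = 130.0
t_tiny = scaled_time(100.0, 0.0, 5.0)      # (100 + 0) * 0.5 = 50.0
```

So a 5cm robot can finish in up to twice the wall-clock time of a 10cm robot and still tie it on scaled time.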