This paper presents a method for robot positioning using visual information from cameras placed in the environment. The goal is to estimate both the robot's global position and a variable number of features on the robot. An algorithm is defined that performs 3D reconstruction while simultaneously updating the robot's position. The problem statement is equivalent to the visual SLAM problem, and all definitions therefore follow a top-down Bayesian formulation. This document presents a novel study of robot positioning performed simultaneously with 3D reconstruction, using a single camera and unknown robot landmarks, which extends naturally to a multi-camera setup. Odometric information is assumed to be always available in the system and is used in both the estimation and initialization phases.
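To illustrate the kind of estimation loop the abstract refers to, the sketch below shows a minimal linear Kalman filter that fuses odometry (prediction phase) with an external-camera position measurement (update phase). This is a simplified, hypothetical example under the assumption of a linear measurement model, not the paper's actual algorithm, which is Bayesian and handles landmark reconstruction as well.

```python
import numpy as np

# Hypothetical sketch: linear Kalman filter fusing odometry with an
# external-camera position measurement. State x is the 2D robot position.

def predict(x, P, u, Q):
    """Odometry-driven prediction: move by u, inflate covariance by Q."""
    return x + u, P + Q

def update(x, P, z, R):
    """Camera measurement z observes the position directly (H = I)."""
    S = P + R                           # innovation covariance
    K = P @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ (z - x)             # correct state with innovation
    P_new = (np.eye(len(x)) - K) @ P    # shrink covariance
    return x_new, P_new

# Usage: robot starts at the origin, odometry reports a move of (1, 0),
# the environment camera then observes the robot at (1.1, -0.1).
x, P = np.zeros(2), np.eye(2) * 0.5
x, P = predict(x, P, np.array([1.0, 0.0]), np.eye(2) * 0.1)
x, P = update(x, P, np.array([1.1, -0.1]), np.eye(2) * 0.2)
```

After the update the estimate lies between the odometry prediction and the camera measurement, weighted by their relative uncertainties, and the covariance shrinks, reflecting the information gained from the camera.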