The research described in this paper concerns indoor robot navigation, with emphasis on sensor modeling and calibration, environment representation, and self-localization. The main point is that by combining all of these aspects, an effective navigation system is obtained. We present a model of the catadioptric image formation process that simplifies the operations required to process catadioptric images. Once the catadioptric sensor is modeled, it must be calibrated with respect to the robot's other sensors so that their information can be fused. When the sensors are mounted on a robot arm, a hand-eye calibration algorithm can be used; in our case the sensors are mounted on a mobile robot moving over a flat floor, so they have fewer degrees of freedom. For this reason we develop a calibration algorithm for sensors mounted on a mobile robot. Finally, by combining the previous results with a scan matching algorithm that we also develop, we build 3D maps of the environment. These maps are used for the self-localization of the robot and for carrying out path following tasks. We present experiments that show the effectiveness of the proposed algorithms.