An autonomous system that constructs its own navigation strategy for mobile robots is proposed. The navigation strategy is shaped by navigation experiences, accumulated as the robot navigates, according to a classical reinforcement learning procedure. The autonomous system is based on modular hierarchical neural networks. Initially, navigation performance is poor and many collisions occur. Computer simulations show that, after a period of learning, the autonomous system generates efficient obstacle-avoidance and target-seeking behaviors. The experiments also support the conclusion that the autonomous system develops a form of object discrimination capability as well as spatial concepts.
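To make the learning setting concrete, the sketch below shows a classical reinforcement learning loop (tabular Q-learning) on a toy grid world in which collisions are penalized and reaching a target is rewarded. This is only an illustrative assumption: the paper's system uses modular hierarchical neural networks rather than a table, and the grid layout, reward values, and hyperparameters here are hypothetical, not taken from the paper.

```python
# A minimal sketch (not the paper's method): tabular Q-learning on a toy grid
# world with obstacles and a target, illustrating how collision penalties and
# target rewards shape a navigation policy from experience.
import random

GRID_W, GRID_H = 6, 6
OBSTACLES = {(2, 2), (2, 3), (4, 1)}          # assumed obstacle layout
TARGET = (5, 5)                               # assumed target cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1        # assumed learning hyperparameters
Q = {}                                        # Q[(state, action_index)] -> value


def q(state, a):
    return Q.get((state, a), 0.0)


def step(state, a):
    """Apply an action; collisions with walls or obstacles are penalized."""
    nx, ny = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= nx < GRID_W and 0 <= ny < GRID_H) or (nx, ny) in OBSTACLES:
        return state, -1.0, False              # collision: stay put, negative reward
    if (nx, ny) == TARGET:
        return (nx, ny), +1.0, True            # target reached
    return (nx, ny), -0.01, False              # small step cost favors short paths


def choose_action(state):
    if random.random() < EPSILON:              # epsilon-greedy exploration
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q(state, a))


for episode in range(2000):                    # early episodes: many collisions
    state, done = (0, 0), False
    for _ in range(100):
        a = choose_action(state)
        next_state, reward, done = step(state, a)
        best_next = max(q(next_state, b) for b in range(len(ACTIONS)))
        # Classical temporal-difference (Q-learning) update from experience
        Q[(state, a)] = q(state, a) + ALPHA * (reward + GAMMA * best_next - q(state, a))
        state = next_state
        if done:
            break
```

As in the abstract's description, behavior is poor at first (frequent collisions) and improves as the learned values steer the agent toward obstacle avoidance and target seeking.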