It’s not what you look at that matters, it’s what you see.
Henry David Thoreau
Now that we have a stronger chassis and the body is moving, the robot needs to be able to “see” where it is going. A standard Raspberry Pi Camera will be Nenemeni’s eyes. There are three autonomous challenges where the camera will be very useful, so we thought a pan-tilt mount would be the best option, and the module from Pimoroni is just perfect.
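As a quick sketch of driving the Pan-Tilt HAT, something like the following works with Pimoroni's `pantilthat` Python library (the `clamp` helper and the sweep parameters are our own illustration, not from the original post):

```python
# Minimal pan sweep sketch for the Pimoroni Pan-Tilt HAT.
# Assumes the pantilthat library is installed (pip install pantilthat);
# the clamp() helper below is our own addition.
import time


def clamp(angle, lo=-90, hi=90):
    """Keep servo angles inside the HAT's -90..+90 degree range."""
    return max(lo, min(hi, angle))


try:
    import pantilthat

    def sweep(step=15, pause=0.3):
        """Sweep the camera left to right with the tilt held level."""
        pantilthat.tilt(0)
        for angle in range(-90, 91, step):
            pantilthat.pan(clamp(angle))
            time.sleep(pause)

    if __name__ == "__main__":
        sweep()
except ImportError:
    # Running off the Pi: the hardware library is unavailable.
    pass
```

Clamping the angle before every servo call is a cheap safeguard against accidentally commanding the servos past their mechanical limits.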
Furthermore, it is straightforward to get it working with their library and example code, as shown above. Now, in order to start testing more complex programs using OpenCV, we install it on our “CAD” chassis.
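Once OpenCV reports where a target sits in the frame, a natural next step is nudging the pan-tilt so the target drifts toward the image centre. A minimal sketch of that mapping, assuming a 320×240 frame (the gain value and function names are hypothetical, chosen for illustration):

```python
# Sketch: map a detected target's pixel position (e.g. the centroid of a
# colour blob found with OpenCV) to small pan/tilt corrections in degrees.
# FRAME_W/FRAME_H and the gain are assumptions, not values from the post.

FRAME_W, FRAME_H = 320, 240  # assumed camera resolution


def centre_error(cx, cy, w=FRAME_W, h=FRAME_H):
    """Normalised offset of a detected point from the frame centre (-1..1)."""
    return (cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2)


def correction(cx, cy, gain=20.0):
    """Turn the normalised offset into pan/tilt angle adjustments."""
    ex, ey = centre_error(cx, cy)
    # Move opposite to the error so the target drifts toward the centre.
    return -gain * ex, -gain * ey


# Example: a target detected right of centre asks for a pan to the left.
dpan, dtilt = correction(240, 120)
```

Feeding these deltas into the pan-tilt on every frame gives a simple proportional tracking loop; the gain controls how aggressively the camera chases the target.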
Every time I start thinking about “autonomous robots”, I can’t help thinking about the very first chapter I wrote for my PhD dissertation; here are a couple of lines…
Autonomous navigation in unknown environments has been a focus of attention in the mobile robotics community for the last three decades. When neither the location of the robot nor a map of the region is known, localisation and mapping are two tasks that are highly inter-dependent and must be performed concurrently. This problem is known as Simultaneous Localisation and Mapping (SLAM).
In order to gather accurate information about the environment, mobile robots are equipped with a variety of sensors (e.g. laser, vision, sonar, odometry, GPS) that together form a perception system, allowing accurate localisation and the reconstruction of reliable, consistent representations of the environment. Vision sensors give mobile robots a relatively cheap means of obtaining rich 3D information on their environment, but lack the depth accuracy that laser range finders can provide.
With that in mind, the next step is to install and test some ToF sensors… keep reading in the next post!