How can a robot see when it’s about to bump into something? Well, technically it can’t, because robots don’t have eyes, silly. But they can calculate where they are in space through a combination of technologies. So, while it’s not actually sight, it’s pretty close and, when you think about it, also incredibly cool. After all, without any kind of vision, even the smartest robot is useless, isn’t it?
And this is why robotics remains one of the biggest technology challenges today. It requires many different areas of expertise – from control, mechanical and electrical engineering to software development, materials science and mechatronics – and each element must operate smoothly within the whole. It’s a delicate balance. The way a robot moves, for example, is dictated by the materials used to build it, how its components work together, how they are made to move and how they are powered. But if you want a robot to move independently, how can it do so unless it has a way to ‘see’ where it is going?
Of course, optical and imaging technologies are what we do best, so it will come as no surprise that Canon has been working on this particular area of robotics for quite some time. Over thirty years, in fact. Today, it’s called Visual SLAM, which is ironic considering that ‘slamming’ into anything is precisely what the technology prevents. SLAM, or ‘Simultaneous Localisation and Mapping’ (less catchy, but certainly more accurate), is a technology that simultaneously estimates a robot’s position and the structure of its surrounding environment – in other words, it works out where the robot is while building a map of what’s around it. The original Visual SLAM system was created to merge real and virtual worlds in a head-mounted display, which we now know as Mixed Reality. Today, Visual SLAM could be used across all kinds of automated tasks in industries from manufacturing and hospitality to healthcare and construction.
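To make the ‘localisation’ half of that idea a little more concrete, here is a minimal sketch in Python using the open-source OpenCV library. Canon’s own Visual SLAM system is not public, so this is an illustrative approximation, not their implementation – and the camera intrinsics and video file name below are placeholder assumptions. The sketch tracks visual features between consecutive camera frames and recovers how the camera moved between them:

```python
# A minimal sketch of visual-SLAM-style localisation (really, visual
# odometry) using OpenCV. Illustrative only -- not Canon's implementation.
import cv2
import numpy as np

def relative_pose(gray1, gray2, K):
    """Estimate the rotation R and translation direction t of the camera
    between two consecutive greyscale frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)

    # Match binary feature descriptors between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # The essential matrix encodes the camera motion between the frames;
    # RANSAC discards mismatched features along the way.
    E, _ = cv2.findEssentialMat(pts1, pts2, K,
                                method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t  # monocular translation is only known up to scale

# Hypothetical usage: chain frame-to-frame motion into a running pose.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics
cap = cv2.VideoCapture("robot_camera.mp4")                   # placeholder source
ok, prev = cap.read()
pose_R, pose_t = np.eye(3), np.zeros((3, 1))
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    R, t = relative_pose(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), K)
    pose_t = pose_t + pose_R @ t  # accumulate motion (conventions simplified)
    pose_R = R @ pose_R
    prev = frame
print("Estimated position:", pose_t.ravel())
```

A full Visual SLAM system goes further than this sketch: it also triangulates those matched features into a persistent 3D map and continually refines the map and the camera poses together – which is what ultimately lets a robot ‘see’ an obstacle before it slams into it.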