2017-06-16

SLAM!

OctoMY™ has huge ambitions when it comes to how an agent will localize itself and map its environment.

OctoMY™ SLAM (Simultaneous Localization And Mapping)
When a robot looks at its environment through its sensors, it gets a pattern of data. It then has to process this pattern to identify its location in the environment, and also to accumulate meaningful information about the structure of that environment for future use (creating a map).
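
In code, that perceive-localize-map cycle boils down to something like the minimal C++ sketch below. All the types and names here (SensorReading, Pose, Map, slamStep and friends) are made up for illustration; none of this is actual OctoMY™ API.

    #include <utility>
    #include <vector>

    // Hypothetical types for illustration only; not actual OctoMY API.
    struct SensorReading { std::vector<float> ranges; };       // raw pattern of data
    struct Pose { float x = 0.0f, y = 0.0f, heading = 0.0f; }; // estimated location
    struct Map { std::vector<std::pair<Pose, SensorReading>> observations; };

    // Refine the pose estimate by matching the new reading against the map.
    // A real implementation would do scan matching or feature matching here;
    // this stub simply keeps the previous estimate.
    Pose localize(const Pose &previous, const SensorReading &, const Map &)
    {
        return previous;
    }

    // Fold the reading into the map at the estimated pose.
    void integrate(Map &map, const Pose &pose, const SensorReading &reading)
    {
        map.observations.push_back({pose, reading});
    }

    // One iteration of the simultaneous localize-and-map cycle.
    void slamStep(Pose &pose, Map &map, const SensorReading &reading)
    {
        pose = localize(pose, reading, map); // where am I, given what I sense?
        integrate(map, pose, reading);       // remember what I sensed, and where
    }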

The data could be as simple as the readings of an ultrasonic range finder or as complex as the video streams from a stereoscopic pair of cameras. The algorithms and heuristics used to derive meaningful localization and mapping data from it are loosely grouped under the term SLAM.

[Image: Stereo Camera Sensor]

There are about as many approaches to SLAM as there are implementations out there. This is a really hard set of problems with no single "best" solution. Most implementations are crafted around a specific set of performance constraints and trade-offs (sketched as a configuration after this list), such as:

  • Available processing power ("must run in real-time on a smartphone").
  • Indoor only, or outdoor as well.
  • 2D or 3D mapping.
  • Sparsity/size of map data.
  • Accuracy of localization.
  • Loop closure (detecting and resolving circular pathways in the environment).
  • Scalability with the size of the environment.
  • Suitability for GPU/FPGA/DSP acceleration.
  • Etc.
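
To make those axes concrete, here is a rough sketch of how such trade-offs might be expressed as a configuration. Every name is made up for illustration; this is not actual OctoMY™ API.

    // Hypothetical configuration sketch capturing the trade-offs above.
    struct SlamConstraints {
        bool  realTimeOnSmartphone = false;  // hard CPU budget per frame
        bool  outdoorCapable       = false;  // lighting, weather, large scale
        bool  map3D                = true;   // full 3D vs. flat 2D mapping
        float maxMapMemoryMB       = 512.0f; // sparsity/size of map data
        float targetAccuracyM      = 0.05f;  // localization accuracy in metres
        bool  loopClosure          = true;   // detect and resolve revisited places
        bool  gpuAcceleration      = false;  // offload to GPU/FPGA/DSP if present
    };
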
So now that you have a grasp on what SLAM is, it's time to move on to how this is planned for OctoMY™.

Since we are basically building a platform on top of which we hope you will add all the cool plugins and features, there really aren't any limitations on what kinds of SLAM methods can become part of OctoMY™. But since SLAM is such an essential concept, we will have to provide a pretty darn good default implementation from the get-go.
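
As a rough illustration of what that pluggability could look like, consider the hypothetical interface below, reusing the SensorReading and Pose sketches from earlier. Nothing here is settled; it is just one possible shape.

    #include <string>

    // Hypothetical plugin interface sketch; not actual OctoMY API.
    // Any SLAM method (ultrasonic grid mapping, stereo visual SLAM, ...)
    // could be wrapped behind one common interface and swapped at runtime.
    class ISlamMethod {
    public:
        virtual ~ISlamMethod() = default;
        virtual std::string name() const = 0;
        // Feed one batch of raw sensor data; the implementation updates its
        // own internal pose estimate and map representation.
        virtual void feed(const SensorReading &reading) = 0;
        virtual Pose currentPose() const = 0;
    };

The default implementation shipped with the platform and any third-party plugins would then be interchangeable behind the same interface.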

Nothing is set in stone yet, but here are some of the thoughts about SLAM in OctoMY™:

  • Should be unsupervised as far as possible
    • Automatic camera lens/other sensor calibration
    • Automatic environment scale detection
    • Automatic tweaking of parameters to get the best result in general
  • Should generate a detailed and accurate 3D map.
  • Should support huge maps
  • Should support a distributed data model that allows sharing of data between agents all over the world.
  • Should support input from whatever sensors are available, and make the most of it.
  • Should adapt to the processing hardware available (see the sketch after this list):
    • Make use of hardware acceleration through OpenCL when available.
    • Use fewer resources on lower-end platforms such as old smartphones.
  • Should use less hand-crafted and more "learned" methods.
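
The hardware-adaptation point could boil down to a runtime capability check along these lines. The clGetPlatformIDs call is real OpenCL API; the two backend classes are hypothetical stand-ins for implementations of the ISlamMethod interface sketched earlier.

    #include <memory>
    #include <CL/cl.h>

    // Real OpenCL call: ask how many OpenCL platforms the system exposes.
    bool openClAvailable()
    {
        cl_uint numPlatforms = 0;
        return clGetPlatformIDs(0, nullptr, &numPlatforms) == CL_SUCCESS
            && numPlatforms > 0;
    }

    // Pick a SLAM backend matching the hardware. DenseGpuSlam and
    // SparseCpuSlam are hypothetical, for illustration only.
    std::unique_ptr<ISlamMethod> pickBackend()
    {
        if (openClAvailable()) {
            return std::make_unique<DenseGpuSlam>(); // dense, GPU-accelerated
        }
        return std::make_unique<SparseCpuSlam>();    // frugal, for old phones
    }
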
In conclusion: yes, the ambitions are sky-high for the central problem of localizing in and mapping the environment. In later articles we will surely explore parts of this problem in more detail.

